Jan 22 11:47:45 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 22 11:47:45 crc kubenswrapper[5120]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 11:47:45 crc kubenswrapper[5120]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 22 11:47:45 crc kubenswrapper[5120]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 11:47:45 crc kubenswrapper[5120]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 11:47:45 crc kubenswrapper[5120]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 22 11:47:45 crc kubenswrapper[5120]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.370025 5120 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376063 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376106 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376118 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376130 5120 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376140 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376149 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376158 5120 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376167 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376176 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376187 5120 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376196 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376205 5120 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376215 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376224 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376234 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376242 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376251 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376260 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376269 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376280 5120 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376289 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376298 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376306 5120 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376317 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376327 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376336 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376345 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376354 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376363 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376383 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376393 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376403 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376412 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376421 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376430 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376439 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376448 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376458 5120 feature_gate.go:328] unrecognized feature gate: Example
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376467 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376476 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376486 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376494 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376504 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376513 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376522 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376532 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376543 5120 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376552 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376561 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376569 5120 feature_gate.go:328] unrecognized feature gate: Example2
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376579 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376588 5120 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376597 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376610 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376623 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376635 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376646 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376656 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376667 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376679 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376688 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376698 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376708 5120 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376720 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376730 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376739 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376748 5120 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376758 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376767 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376776 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376790 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376805 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376815 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376825 5120 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376834 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376849 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376860 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376871 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376881 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376891 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376901 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376911 5120 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376921 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376930 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376940 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.376949 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378341 5120 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378379 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378391 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378403 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378414 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378425 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378436 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378446 5120 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378458 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378469 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378480 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378490 5120 feature_gate.go:328] unrecognized feature gate: Example
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378500 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378510 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378520 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378531 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378540 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378550 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378559 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378569 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378579 5120 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378591 5120 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378602 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378612 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378621 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378631 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378640 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378650 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378659 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378669 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378678 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378691 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378703 5120 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378715 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378725 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378735 5120 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378744 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378754 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378763 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378773 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378783 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378792 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378833 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378844 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378854 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378863 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378873 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378885 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378894 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378903 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378912 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378921 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378931 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378941 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.378988 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379001 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379010 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379020 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379030 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379040 5120 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379052 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379061 5120 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379071 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379084 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379097 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379108 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379118 5120 feature_gate.go:328] unrecognized feature gate: Example2
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379128 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379138 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379148 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379157 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379167 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379176 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379186 5120 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379198 5120 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379208 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379217 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379226 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379235 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379244 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379257 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379266 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379275 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379285 5120 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379294 5120 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.379304 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379495 5120 flags.go:64] FLAG: --address="0.0.0.0"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379523 5120 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379543 5120 flags.go:64] FLAG: --anonymous-auth="true"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379556 5120 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379570 5120 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379580 5120 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379593 5120 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379607 5120 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379618 5120 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379629 5120 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379640 5120 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379652 5120 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379663 5120 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379673 5120 flags.go:64] FLAG: --cgroup-root=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379683 5120 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379694 5120 flags.go:64] FLAG: --client-ca-file=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379704 5120 flags.go:64] FLAG: --cloud-config=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379714 5120 flags.go:64] FLAG: --cloud-provider=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379723 5120 flags.go:64] FLAG: --cluster-dns="[]"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379736 5120 flags.go:64] FLAG: --cluster-domain=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379747 5120 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379798 5120 flags.go:64] FLAG: --config-dir=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379812 5120 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379825 5120 flags.go:64] FLAG: --container-log-max-files="5"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379839 5120 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379849 5120 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379860 5120 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379871 5120 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379881 5120 flags.go:64] FLAG: --contention-profiling="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379892 5120 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379902 5120 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379913 5120 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379924 5120 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379938 5120 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379952 5120 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.379998 5120 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380009 5120 flags.go:64] FLAG: --enable-load-reader="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380020 5120 flags.go:64] FLAG: --enable-server="true"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380031 5120 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380047 5120 flags.go:64] FLAG: --event-burst="100"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380058 5120 flags.go:64] FLAG: --event-qps="50"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380069 5120 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380079 5120 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380089 5120 flags.go:64] FLAG: --eviction-hard=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380103 5120 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380113 5120 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380124 5120 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380135 5120 flags.go:64] FLAG: --eviction-soft=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380145 5120 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380155 5120 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380165 5120 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380176 5120 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380188 5120 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380198 5120 flags.go:64] FLAG: --fail-swap-on="true"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380209 5120 flags.go:64] FLAG: --feature-gates=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380222 5120 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380233 5120 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380244 5120 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380255 5120 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380265 5120 flags.go:64] FLAG: --healthz-port="10248"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380275 5120 flags.go:64] FLAG: --help="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380286 5120 flags.go:64] FLAG: --hostname-override=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380296 5120 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380306 5120 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380317 5120 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380326 5120 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380336 5120 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380349 5120 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380359 5120 flags.go:64] FLAG: --image-service-endpoint=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380369 5120 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380379 5120 flags.go:64] FLAG: --kube-api-burst="100"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380389 5120 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380401 5120 flags.go:64] FLAG: --kube-api-qps="50"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380411 5120 flags.go:64] FLAG: --kube-reserved=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380422 5120 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380432 5120 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380443 5120 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380454 5120 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380464 5120 flags.go:64] FLAG: --lock-file=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380474 5120 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380484 5120 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380495 5120 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380512 5120 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380522 5120 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380534 5120 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380545 5120 flags.go:64] FLAG: --logging-format="text"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380555 5120 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380566 5120 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380577 5120 flags.go:64] FLAG: --manifest-url=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380587 5120 flags.go:64] FLAG: --manifest-url-header=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380601 5120 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380612 5120 flags.go:64] FLAG: --max-open-files="1000000"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380625 5120 flags.go:64] FLAG: --max-pods="110"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380636 5120 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380646 5120 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380655 5120 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380665 5120 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380676 5120 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380686 5120 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380697 5120 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380726 5120 flags.go:64] FLAG: --node-status-max-images="50"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380736 5120 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380746 5120 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380756 5120 flags.go:64] FLAG: --pod-cidr=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380767 5120 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380784 5120 flags.go:64] FLAG: --pod-manifest-path=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380794 5120 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380805 5120 flags.go:64] FLAG: --pods-per-core="0"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380815 5120 flags.go:64] FLAG: --port="10250"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380826 5120 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380836 5120 flags.go:64] FLAG: --provider-id=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380845 5120 flags.go:64] FLAG: --qos-reserved=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380855 5120 flags.go:64] FLAG: --read-only-port="10255"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380866 5120 flags.go:64] FLAG: --register-node="true"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380876 5120 flags.go:64] FLAG: --register-schedulable="true"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380886 5120 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380906 5120 flags.go:64] FLAG: --registry-burst="10"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380917 5120 flags.go:64] FLAG: --registry-qps="5"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380927 5120 flags.go:64] FLAG: --reserved-cpus=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380938 5120 flags.go:64] FLAG: --reserved-memory=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.380951 5120 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381008 5120 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381019 5120 flags.go:64] FLAG: --rotate-certificates="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381029 5120 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381039 5120 flags.go:64] FLAG: --runonce="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381049 5120 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381059 5120 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381070 5120 flags.go:64] FLAG: --seccomp-default="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381081 5120 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381091 5120 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381102 5120 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381112 5120 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381123 5120 flags.go:64] FLAG: --storage-driver-password="root"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381135 5120 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381146 5120 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381155 5120 flags.go:64] FLAG: --storage-driver-user="root"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381165 5120 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381176 5120 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381187 5120 flags.go:64] FLAG: --system-cgroups=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381197 5120 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381215 5120 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381225 5120 flags.go:64] FLAG: --tls-cert-file=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381234 5120 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381248 5120 flags.go:64] FLAG: --tls-min-version=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381258 5120 flags.go:64] FLAG: --tls-private-key-file=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381268 5120 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381278 5120 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381289 5120 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381299 5120 flags.go:64] FLAG: --v="2"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381315 5120 flags.go:64] FLAG: --version="false"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381328 5120 flags.go:64] FLAG: --vmodule=""
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381342 5120 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.381353 5120 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381600 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381618 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381629 5120 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381639 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381649 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381659 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381668 5120 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381678 5120 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381687 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381697 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381706 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381716 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381726 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381737 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381746 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381756 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381765 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381774 5120 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381783 5120 feature_gate.go:328] unrecognized feature gate: Example2
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381792 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381801 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381810 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381819 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381829 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381838 5120 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381848 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381857 5120 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381867 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381876 5120 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381886 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381896 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381906 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381915 5120 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381923 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381936 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381947 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.381994 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382004 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382013 5120 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382023 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382033 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382042 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382051 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382061 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382070 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382080 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382092 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382101 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382111 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382121 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382130 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382139 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382148 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382157 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382167 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382176 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382185 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382194 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382203 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382212 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382228 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382238 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382247 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382256 5120 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382267 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382276 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382285 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382295 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382303 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382313 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382322 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382332 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382345 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382357 5120 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382367 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382378 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382392 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382401 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382411 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382422 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382432 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382441 5120 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382449 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382604 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382613 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.382623 5120 feature_gate.go:328] unrecognized feature gate: Example
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.382652 5120 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.393162 5120 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.393199 5120 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393271 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393279 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393285 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393289 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393294 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393299 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393304 5120 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393309 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393313 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393317 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393322 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393327 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393331 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393336 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393340 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393344 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393349 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393353 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393358 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393362 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393367 5120 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393372 5120 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393377 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393382 5120 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393387 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393391 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393397 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393401 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393406 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393411 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393415 5120 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393421 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393425 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393430 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393434 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393439 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393443 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393448 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393604 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393608 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393612 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393617 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393621 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393626 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393630 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393635 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393639 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393644 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393648 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393653 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393657 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393662 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393666 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393672 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393678 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393684 5120 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393689 5120 feature_gate.go:328] unrecognized feature gate:
AWSClusterHostedDNS Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393695 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393701 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393706 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393714 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393723 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393729 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393736 5120 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393743 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393750 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393756 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393761 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393768 5120 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393774 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393780 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393785 5120 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393790 5120 feature_gate.go:328] unrecognized feature gate: Example Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393795 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393799 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393806 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
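
The flood of "unrecognized feature gate" warnings above is expected on an OpenShift node: the rendered kubelet configuration carries the cluster-wide FeatureGate list, which includes operator-level gates (GatewayAPI, ManagedBootImages, PinnedImages, ...) that the kubelet's own gate table does not know. As the W-level lines show, this build merely warns on an unknown name and keeps going; only a malformed value is fatal. Below is a minimal Go sketch of that warn-versus-fail split, with an illustrative two-entry gate table (an assumption for brevity; the real table lives in k8s.io/component-base/featuregate):

    // Sketch: warn-on-unknown feature-gate parsing, loosely modeled on the
    // behavior visible in the log. The "known" table is illustrative only.
    package main

    import (
        "fmt"
        "log"
        "strconv"
    )

    var known = map[string]bool{ // gate name -> default value
        "ImageVolume": true,
        "NodeSwap":    false,
    }

    func set(gates map[string]string) (map[string]bool, error) {
        eff := map[string]bool{}
        for k, v := range known {
            eff[k] = v
        }
        for name, raw := range gates {
            if _, ok := known[name]; !ok {
                log.Printf("W unrecognized feature gate: %s", name) // warn, continue
                continue
            }
            val, err := strconv.ParseBool(raw)
            if err != nil { // only a bad value aborts startup
                return nil, fmt.Errorf("invalid value %q for feature gate %s", raw, name)
            }
            eff[name] = val
        }
        return eff, nil
    }

    func main() {
        eff, err := set(map[string]string{"ImageVolume": "true", "GatewayAPI": "true"})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(eff) // GatewayAPI is skipped with a warning, as in the log above
    }
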
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393813 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393820 5120 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393827 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393833 5120 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393839 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393846 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393853 5120 feature_gate.go:328] unrecognized feature gate: Example2 Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393859 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393865 5120 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.393872 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.393881 5120 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394037 5120 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394047 5120 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394052 5120 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394056 5120 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394061 5120 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394067 5120 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394073 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394078 5120 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394083 5120 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394089 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394095 5120 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 
11:47:45.394100 5120 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394105 5120 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394110 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394114 5120 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394119 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394124 5120 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394129 5120 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394133 5120 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394138 5120 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394143 5120 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394148 5120 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394153 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394158 5120 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394163 5120 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394167 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394172 5120 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394177 5120 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394182 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394187 5120 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394193 5120 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
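
The state that actually took effect is not in the warnings but in the repeated I-level "feature gates: {map[...]}" lines. When grepping a journal for the effective configuration, that is the line to parse. A small sketch of recovering the map from such a line follows; splitting on whitespace and then on ':' is an assumption that holds here because gate names and boolean values contain neither:

    // Sketch: parse the effective gate map out of a
    // `feature gates: {map[Name:bool ...]}` journal line.
    package main

    import (
        "fmt"
        "strings"
    )

    func parseGates(line string) map[string]bool {
        i := strings.Index(line, "map[")
        j := strings.LastIndex(line, "]")
        out := map[string]bool{}
        if i < 0 || j <= i {
            return out
        }
        for _, kv := range strings.Fields(line[i+len("map["):j]) {
            if k, v, ok := strings.Cut(kv, ":"); ok {
                out[k] = v == "true"
            }
        }
        return out
    }

    func main() {
        line := `feature gates: {map[ImageVolume:true KMSv1:true NodeSwap:false]}`
        fmt.Println(parseGates(line)["KMSv1"]) // true
    }
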
Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394198 5120 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394203 5120 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394207 5120 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394211 5120 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394216 5120 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394220 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394226 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394232 5120 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394238 5120 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394243 5120 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394247 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394252 5120 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394258 5120 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394263 5120 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394268 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394272 5120 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394277 5120 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394282 5120 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394286 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394291 5120 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394295 5120 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394300 5120 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394304 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394309 5120 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394313 5120 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394318 5120 feature_gate.go:328] 
unrecognized feature gate: NetworkLiveMigration Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394322 5120 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394326 5120 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394332 5120 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394337 5120 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394341 5120 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394346 5120 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394350 5120 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394355 5120 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394359 5120 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394364 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394368 5120 feature_gate.go:328] unrecognized feature gate: Example2 Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394372 5120 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394377 5120 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394382 5120 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394386 5120 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394391 5120 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394395 5120 feature_gate.go:328] unrecognized feature gate: Example Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394400 5120 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394404 5120 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394409 5120 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394414 5120 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394418 5120 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394423 5120 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394427 5120 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394431 5120 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394436 
5120 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394441 5120 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394445 5120 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 22 11:47:45 crc kubenswrapper[5120]: W0122 11:47:45.394449 5120 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.394458 5120 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.394843 5120 server.go:962] "Client rotation is on, will bootstrap in background" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.397251 5120 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.400674 5120 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.400815 5120 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.401420 5120 server.go:1019] "Starting client certificate rotation" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.401557 5120 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.401630 5120 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.408132 5120 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.415200 5120 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.415868 5120 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.426973 5120 log.go:25] "Validated CRI v1 runtime API" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.447688 5120 log.go:25] "Validated CRI v1 image API" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.449720 5120 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 11:47:45 crc kubenswrapper[5120]: 
I0122 11:47:45.451947 5120 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-22-11-41-41-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.452007 5120 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.470402 5120 manager.go:217] Machine: {Timestamp:2026-01-22 11:47:45.468968924 +0000 UTC m=+0.212917305 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:382cdad4-0171-4b64-8e1b-b8f3f02e6a19 BootID:60403ab6-2e1e-4736-9a34-cfc7e1924d0b Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:28:ea:a5 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:28:ea:a5 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:31:a5:73 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1a:4f:06 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:75:82:59 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d5:7f:9d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:22:98:3f:ae:10:ec Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a6:1a:2f:14:ae:d3 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 
Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.470635 5120 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
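
The "Filesystem partitions" and Machine dump above are cAdvisor's hardware/filesystem inventory pass. The per-mountpoint Capacity and Inodes figures come from statfs(2): capacity is block size times total blocks, and the inode count is the Files field. A Linux-only Go sketch of the same arithmetic is below; the hard-coded mountpoint list is a stand-in assumption, whereas cAdvisor discovers mounts itself:

    // Sketch: derive per-mountpoint capacity and inode counts via statfs(2),
    // matching the Filesystems entries in the machine dump. Linux-only.
    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        for _, mp := range []string{"/", "/boot", "/var"} {
            var st syscall.Statfs_t
            if err := syscall.Statfs(mp, &st); err != nil {
                fmt.Println(mp, err)
                continue
            }
            capacity := uint64(st.Bsize) * st.Blocks // bytes, as in Capacity:...
            fmt.Printf("%s capacity=%d inodes=%d\n", mp, capacity, st.Files)
        }
    }
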
Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.470804 5120 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.472035 5120 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.472077 5120 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.472271 5120 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.472284 5120 container_manager_linux.go:306] "Creating device plugin manager" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.472310 5120 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.472510 5120 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.472872 5120 state_mem.go:36] "Initialized new in-memory state store" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.473052 5120 server.go:1267] "Using root directory" path="/var/lib/kubelet" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.473590 5120 kubelet.go:491] "Attempting to sync node with API server" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.473727 5120 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.473749 5120 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.473763 
5120 kubelet.go:397] "Adding apiserver pod source" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.473779 5120 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.476718 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.476849 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.477398 5120 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.477420 5120 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.482574 5120 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.482647 5120 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.484418 5120 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.484794 5120 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.485431 5120 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.486200 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.486246 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.486262 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.486277 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.486289 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.486311 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.486325 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.486346 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.486362 5120 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/fc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.486387 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.486408 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.486576 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.487313 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.487362 5120 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.488703 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.503119 5120 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.503213 5120 server.go:1295] "Started kubelet" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.503637 5120 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.503997 5120 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.504078 5120 server_v1.go:47] "podresources" method="list" useActivePods=true Jan 22 11:47:45 crc systemd[1]: Started Kubernetes Kubelet. Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.504673 5120 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.505518 5120 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.505854 5120 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.506302 5120 volume_manager.go:295] "The desired_state_of_world populator starts" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.506369 5120 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.506593 5120 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.505875 5120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d0b211eb0428c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.503167116 +0000 UTC m=+0.247115497,LastTimestamp:2026-01-22 11:47:45.503167116 +0000 UTC m=+0.247115497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:47:45 crc 
kubenswrapper[5120]: E0122 11:47:45.506804 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.508607 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="200ms" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.509062 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.509366 5120 server.go:317] "Adding debug handlers to kubelet server" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.510870 5120 factory.go:55] Registering systemd factory Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.510928 5120 factory.go:223] Registration of the systemd container factory successfully Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.511325 5120 factory.go:153] Registering CRI-O factory Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.511350 5120 factory.go:223] Registration of the crio container factory successfully Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.511444 5120 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.511467 5120 factory.go:103] Registering Raw factory Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.511480 5120 manager.go:1196] Started watching for new ooms in manager Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.512275 5120 manager.go:319] Starting recovery of all containers Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.544647 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.544707 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.544722 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.544739 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.544753 5120 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.544764 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.544776 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.544806 5120 manager.go:324] Recovery completed Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.544794 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545118 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545147 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545196 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545211 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545246 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545257 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545273 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545285 5120 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545297 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545324 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545335 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545352 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545365 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545377 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545388 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545406 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545418 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545436 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545451 5120 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545465 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545491 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545506 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545532 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545547 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545561 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545598 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545614 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545630 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545646 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545663 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545679 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545698 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545758 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545782 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545797 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545809 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545822 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545841 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545857 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545883 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545899 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545934 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545946 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.545976 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546004 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546018 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546059 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546105 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546125 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546151 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546193 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546205 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546218 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546229 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546257 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546270 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546282 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546305 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546317 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546329 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546341 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546360 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546371 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" 
volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546385 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546396 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546418 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546431 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546442 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546454 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546466 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546478 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546489 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546499 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546524 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546535 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546547 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546559 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546571 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546587 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546598 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546610 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546643 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546654 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546665 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546676 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546766 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546812 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546828 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546912 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546946 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546977 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.546992 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547003 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547017 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547029 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547040 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" 
volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547051 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547078 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547089 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547100 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547111 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547124 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547135 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547146 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547186 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547219 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547231 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" 
volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547244 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547256 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547267 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547291 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547303 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547315 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547339 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547350 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547362 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547394 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547406 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547432 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547444 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547455 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547476 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547487 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547521 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547543 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547573 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547586 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547597 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547608 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" 
volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547629 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547660 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547670 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547682 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547694 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547716 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547728 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547740 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.547769 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548593 5120 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548620 5120 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548634 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548654 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548665 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548676 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548689 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548734 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548746 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548756 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548767 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548787 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548808 5120 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548824 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548838 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548867 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548881 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548894 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548905 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548917 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548929 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.548940 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549051 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549083 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549136 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549154 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549179 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549197 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549237 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549249 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549262 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549288 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549298 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549331 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549343 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549393 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549405 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549416 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549427 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549449 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549460 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549472 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549483 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549505 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549536 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549550 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" 
volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549562 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549585 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549595 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549609 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549619 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549630 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549641 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549652 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549663 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549688 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549708 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" 
seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549719 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549730 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549742 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549764 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549774 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549809 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549833 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549849 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549864 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549878 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549893 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: 
I0122 11:47:45.549904 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549915 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549927 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549940 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549972 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.549990 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550001 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550030 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550051 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550067 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550078 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550114 5120 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550126 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550141 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550152 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550163 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550174 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550193 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550204 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550216 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550228 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550239 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550252 5120 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550270 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550288 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550305 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550321 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550337 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550354 5120 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550369 5120 reconstruct.go:97] "Volume reconstruction finished" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.550378 5120 reconciler.go:26] "Reconciler: start to sync state" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.557221 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.561546 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.561589 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.561599 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.562694 5120 cpu_manager.go:222] "Starting CPU manager" policy="none" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.562709 5120 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.562749 5120 state_mem.go:36] "Initialized new in-memory state store" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.567678 5120 kubelet_network_linux.go:49] "Initialized iptables 
rules." protocol="IPv4" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.569237 5120 policy_none.go:49] "None policy: Start" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.569255 5120 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.569267 5120 state_mem.go:35] "Initializing new in-memory state store" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.570388 5120 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.570447 5120 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.570490 5120 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.570502 5120 kubelet.go:2451] "Starting kubelet main sync loop" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.570635 5120 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.572787 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.607763 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.610412 5120 manager.go:341] "Starting Device Plugin manager" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.610655 5120 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.610673 5120 server.go:85] "Starting device plugin registration server" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.611041 5120 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.611058 5120 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.611289 5120 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.611403 5120 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.611419 5120 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.614181 5120 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.614230 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.671495 5120 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.671725 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.672713 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.672769 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.672785 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.676443 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.676701 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.676786 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.677330 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.677371 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.677384 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.677851 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.677890 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.677905 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.678374 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.678446 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.678520 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.679082 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.679111 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.679112 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.679142 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.679153 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.679120 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.679712 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.679807 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.679891 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.680337 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.680358 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.680367 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.680510 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.680586 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.680601 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.681612 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.681629 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.681653 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.682170 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.682203 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.682235 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.682208 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.682250 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.682265 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.683030 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.683065 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.683662 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.683701 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.683717 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.703188 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.710480 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="400ms" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.711459 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.712174 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.712215 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.712230 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.712254 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.712623 
5120 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.720623 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.731012 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.753200 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753340 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753399 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753434 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753459 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753493 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753513 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753531 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 
11:47:45.753548 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753564 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753584 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753648 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753716 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753744 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753760 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753836 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753863 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753918 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753930 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753975 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753986 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.753997 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.754006 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.754033 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.754057 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.754085 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.754107 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.754109 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.754126 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.754303 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.754478 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.759376 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.854841 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.854885 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.854905 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.854925 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.854944 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.854974 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.854988 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.854975 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855041 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855065 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855092 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855099 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855122 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855125 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855144 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855162 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855181 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855187 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855199 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855205 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855206 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855206 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855208 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855162 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc 
kubenswrapper[5120]: I0122 11:47:45.855224 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855336 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855241 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855363 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855393 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855400 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855406 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.855524 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.913258 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.914103 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.914167 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.914188 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:45 crc kubenswrapper[5120]: I0122 11:47:45.914213 
5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:47:45 crc kubenswrapper[5120]: E0122 11:47:45.914733 5120 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.004040 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.021488 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:46 crc kubenswrapper[5120]: W0122 11:47:46.025643 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-35d544c493682aa956d8b47e25e0400bd0f9854531e43cd9f9bbe8659847154c WatchSource:0}: Error finding container 35d544c493682aa956d8b47e25e0400bd0f9854531e43cd9f9bbe8659847154c: Status 404 returned error can't find the container with id 35d544c493682aa956d8b47e25e0400bd0f9854531e43cd9f9bbe8659847154c Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.031541 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.031581 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:47:46 crc kubenswrapper[5120]: W0122 11:47:46.044823 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-926615f934e04fb89be16f58f95779e9a509dfa04e2eb2bfcf7c3916bca9b25f WatchSource:0}: Error finding container 926615f934e04fb89be16f58f95779e9a509dfa04e2eb2bfcf7c3916bca9b25f: Status 404 returned error can't find the container with id 926615f934e04fb89be16f58f95779e9a509dfa04e2eb2bfcf7c3916bca9b25f Jan 22 11:47:46 crc kubenswrapper[5120]: W0122 11:47:46.046084 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-d53ad787ade1e8bf8d01adaf479b52c71dae3b10da00877f0b14a6f7115ccab5 WatchSource:0}: Error finding container d53ad787ade1e8bf8d01adaf479b52c71dae3b10da00877f0b14a6f7115ccab5: Status 404 returned error can't find the container with id d53ad787ade1e8bf8d01adaf479b52c71dae3b10da00877f0b14a6f7115ccab5 Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.054048 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.060502 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 11:47:46 crc kubenswrapper[5120]: W0122 11:47:46.069836 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-4e740b882ca504677a8b69c9138683d9c616e2ae5ed5568f543ca659799263e3 WatchSource:0}: Error finding container 4e740b882ca504677a8b69c9138683d9c616e2ae5ed5568f543ca659799263e3: Status 404 returned error can't find the container with id 4e740b882ca504677a8b69c9138683d9c616e2ae5ed5568f543ca659799263e3 Jan 22 11:47:46 crc kubenswrapper[5120]: W0122 11:47:46.077020 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-d524d8874ae20f6cdc757aaaff7cffc98920fb72eeea0b2baaf152f0b384ecc8 WatchSource:0}: Error finding container d524d8874ae20f6cdc757aaaff7cffc98920fb72eeea0b2baaf152f0b384ecc8: Status 404 returned error can't find the container with id d524d8874ae20f6cdc757aaaff7cffc98920fb72eeea0b2baaf152f0b384ecc8 Jan 22 11:47:46 crc kubenswrapper[5120]: E0122 11:47:46.111098 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="800ms" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.314872 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.316701 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.316744 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.316759 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.316801 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:47:46 crc kubenswrapper[5120]: E0122 11:47:46.317272 5120 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Jan 22 11:47:46 crc kubenswrapper[5120]: E0122 11:47:46.427686 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 11:47:46 crc kubenswrapper[5120]: E0122 11:47:46.435343 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.489802 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.577214 5120 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466" exitCode=0 Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.577268 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466"} Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.577436 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"d53ad787ade1e8bf8d01adaf479b52c71dae3b10da00877f0b14a6f7115ccab5"} Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.577588 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.579639 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.579666 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.579675 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:46 crc kubenswrapper[5120]: E0122 11:47:46.579851 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.581321 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7"} Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.581372 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"926615f934e04fb89be16f58f95779e9a509dfa04e2eb2bfcf7c3916bca9b25f"} Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.582781 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41" exitCode=0 Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.582874 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41"} Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.582898 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"35d544c493682aa956d8b47e25e0400bd0f9854531e43cd9f9bbe8659847154c"} Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.583080 5120 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.583639 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.583687 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.583698 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:46 crc kubenswrapper[5120]: E0122 11:47:46.583945 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.584640 5120 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b" exitCode=0 Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.584664 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b"} Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.584702 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"d524d8874ae20f6cdc757aaaff7cffc98920fb72eeea0b2baaf152f0b384ecc8"} Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.584831 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.585201 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.585225 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.585234 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:46 crc kubenswrapper[5120]: E0122 11:47:46.585387 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.585662 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.586154 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.586183 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.586251 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.586178 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54"} Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.586155 5120 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54" exitCode=0 Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.586359 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"4e740b882ca504677a8b69c9138683d9c616e2ae5ed5568f543ca659799263e3"} Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.586423 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.586923 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.586967 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:46 crc kubenswrapper[5120]: I0122 11:47:46.586981 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:46 crc kubenswrapper[5120]: E0122 11:47:46.587130 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:46 crc kubenswrapper[5120]: E0122 11:47:46.615559 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 11:47:46 crc kubenswrapper[5120]: E0122 11:47:46.911980 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="1.6s" Jan 22 11:47:46 crc kubenswrapper[5120]: E0122 11:47:46.998457 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.117631 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.120629 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.120674 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.120686 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.120714 5120 kubelet_node_status.go:78] "Attempting to register node" 
node="crc" Jan 22 11:47:47 crc kubenswrapper[5120]: E0122 11:47:47.121486 5120 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.417006 5120 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.594545 5120 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84" exitCode=0 Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.594610 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84"} Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.594795 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.595810 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.595845 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.595859 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:47 crc kubenswrapper[5120]: E0122 11:47:47.596088 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.613008 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3"} Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.613099 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.615721 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.615768 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.615783 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:47 crc kubenswrapper[5120]: E0122 11:47:47.616011 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.628250 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7"} Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.628298 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c"} Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.628314 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056"} Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.628539 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.629126 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.629158 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.629169 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:47 crc kubenswrapper[5120]: E0122 11:47:47.629348 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.631765 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45"} Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.631791 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6"} Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.634686 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1"} Jan 22 11:47:47 crc kubenswrapper[5120]: I0122 11:47:47.634822 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c"} Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.639334 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d"} Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.639469 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.640304 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.640338 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.640352 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:48 crc kubenswrapper[5120]: E0122 11:47:48.640563 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.643027 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b5b0652ae23f85601c29c923290c2f9697de7cdb60bc871e65a366f54e67be88"} Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.643083 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc"} Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.643105 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f"} Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.643341 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.644539 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.644580 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.644596 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:48 crc kubenswrapper[5120]: E0122 11:47:48.644943 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.647606 5120 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad" exitCode=0 Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.647732 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.647950 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad"} Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.648225 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.648554 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.648592 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.648679 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:48 crc kubenswrapper[5120]: E0122 11:47:48.648992 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.649383 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.649416 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.649431 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:48 crc kubenswrapper[5120]: E0122 11:47:48.649632 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.721855 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.722848 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.722896 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.722978 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:48 crc kubenswrapper[5120]: I0122 11:47:48.723018 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:47:49 crc kubenswrapper[5120]: I0122 11:47:49.661889 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a"} Jan 22 11:47:49 crc kubenswrapper[5120]: I0122 11:47:49.662015 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:49 crc kubenswrapper[5120]: I0122 11:47:49.662014 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203"} Jan 22 11:47:49 crc kubenswrapper[5120]: I0122 11:47:49.662181 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5"} Jan 22 11:47:49 crc kubenswrapper[5120]: I0122 11:47:49.662199 5120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 11:47:49 crc kubenswrapper[5120]: I0122 11:47:49.662301 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:49 crc kubenswrapper[5120]: I0122 11:47:49.662208 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8"} Jan 22 11:47:49 crc kubenswrapper[5120]: I0122 
11:47:49.662718 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:49 crc kubenswrapper[5120]: I0122 11:47:49.662769 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:49 crc kubenswrapper[5120]: I0122 11:47:49.662785 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:49 crc kubenswrapper[5120]: E0122 11:47:49.663174 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:49 crc kubenswrapper[5120]: I0122 11:47:49.663350 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:49 crc kubenswrapper[5120]: I0122 11:47:49.663438 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:49 crc kubenswrapper[5120]: I0122 11:47:49.663514 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:49 crc kubenswrapper[5120]: E0122 11:47:49.664258 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.670053 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff"} Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.670299 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.671264 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.671337 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.671367 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:50 crc kubenswrapper[5120]: E0122 11:47:50.671869 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.736125 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.736462 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.738038 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.738135 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.738165 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:50 crc kubenswrapper[5120]: E0122 11:47:50.738878 5120 kubelet.go:3336] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.902584 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.903037 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.905125 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.905217 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:50 crc kubenswrapper[5120]: I0122 11:47:50.905243 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:50 crc kubenswrapper[5120]: E0122 11:47:50.905818 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.674504 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.675390 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.675439 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.675457 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:51 crc kubenswrapper[5120]: E0122 11:47:51.676125 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.738477 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.738838 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.740148 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.740197 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.740210 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:51 crc kubenswrapper[5120]: E0122 11:47:51.740569 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.747896 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.777730 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.778193 5120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.778287 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.779475 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.779540 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:51 crc kubenswrapper[5120]: I0122 11:47:51.779558 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:51 crc kubenswrapper[5120]: E0122 11:47:51.780239 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:52 crc kubenswrapper[5120]: I0122 11:47:52.676901 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:52 crc kubenswrapper[5120]: I0122 11:47:52.677901 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:52 crc kubenswrapper[5120]: I0122 11:47:52.677947 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:52 crc kubenswrapper[5120]: I0122 11:47:52.677978 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:52 crc kubenswrapper[5120]: E0122 11:47:52.678329 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:53 crc kubenswrapper[5120]: I0122 11:47:53.098742 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:53 crc kubenswrapper[5120]: I0122 11:47:53.099205 5120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 11:47:53 crc kubenswrapper[5120]: I0122 11:47:53.099288 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:53 crc kubenswrapper[5120]: I0122 11:47:53.100709 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:53 crc kubenswrapper[5120]: I0122 11:47:53.100782 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:53 crc kubenswrapper[5120]: I0122 11:47:53.100797 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:53 crc kubenswrapper[5120]: E0122 11:47:53.101355 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.496866 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.497086 5120 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 11:47:55 crc 
Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.497129 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.498105 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.498145 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.498155 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:55 crc kubenswrapper[5120]: E0122 11:47:55.498852 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.501663 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.501839 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.502586 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.502684 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.502706 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:55 crc kubenswrapper[5120]: E0122 11:47:55.503660 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.530468 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:47:55 crc kubenswrapper[5120]: E0122 11:47:55.614617 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.685144 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.685821 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.685853 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.685862 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:55 crc kubenswrapper[5120]: E0122 11:47:55.686152 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.775002 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.775326 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume
controller attach/detach" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.776295 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.776361 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:55 crc kubenswrapper[5120]: I0122 11:47:55.776378 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:55 crc kubenswrapper[5120]: E0122 11:47:55.776829 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:56 crc kubenswrapper[5120]: I0122 11:47:56.055216 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 22 11:47:56 crc kubenswrapper[5120]: I0122 11:47:56.055469 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:47:56 crc kubenswrapper[5120]: I0122 11:47:56.056499 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:47:56 crc kubenswrapper[5120]: I0122 11:47:56.056570 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:47:56 crc kubenswrapper[5120]: I0122 11:47:56.056593 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:47:56 crc kubenswrapper[5120]: E0122 11:47:56.057377 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:47:57 crc kubenswrapper[5120]: E0122 11:47:57.419320 5120 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 22 11:47:57 crc kubenswrapper[5120]: I0122 11:47:57.491130 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 22 11:47:58 crc kubenswrapper[5120]: I0122 11:47:58.231338 5120 trace.go:236] Trace[1751676925]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 11:47:48.230) (total time: 10000ms): Jan 22 11:47:58 crc kubenswrapper[5120]: Trace[1751676925]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (11:47:58.231) Jan 22 11:47:58 crc kubenswrapper[5120]: Trace[1751676925]: [10.0009339s] [10.0009339s] END Jan 22 11:47:58 crc kubenswrapper[5120]: E0122 11:47:58.231378 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 11:47:58 crc kubenswrapper[5120]: I0122 11:47:58.497040 5120 patch_prober.go:28] interesting 
Jan 22 11:47:58 crc kubenswrapper[5120]: I0122 11:47:58.497040 5120 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 11:47:58 crc kubenswrapper[5120]: I0122 11:47:58.497150 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 11:47:58 crc kubenswrapper[5120]: E0122 11:47:58.512828 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 22 11:47:58 crc kubenswrapper[5120]: I0122 11:47:58.588587 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 11:47:58 crc kubenswrapper[5120]: I0122 11:47:58.588656 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 11:47:58 crc kubenswrapper[5120]: I0122 11:47:58.598640 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 11:47:58 crc kubenswrapper[5120]: I0122 11:47:58.598732 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 11:48:01 crc kubenswrapper[5120]: I0122 11:48:01.652503 5120 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 22 11:48:01 crc kubenswrapper[5120]: I0122 11:48:01.673892 5120 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 22 11:48:01 crc kubenswrapper[5120]: E0122 11:48:01.715546 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s"
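The 403s above actually mark progress: the apiserver is now terminating TLS and answering HTTP, but the kubelet's client certificate has not been signed yet, so its requests, and the unauthenticated startup probe hitting /livez, are attributed to system:anonymous and rejected by RBAC rather than by the network. A sketch of the distinction, with an illustrative endpoint and token path (neither comes from the log):

```go
// livez_check.go - an unauthenticated GET to /livez is evaluated as
// system:anonymous and can return 403 even though the server is healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Probe-style check without CA verification; not for production use.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	req, err := http.NewRequest("GET", "https://localhost:6443/livez", nil)
	if err != nil {
		panic(err)
	}
	// With valid credentials the same request succeeds; hypothetical token path.
	if token, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token"); err == nil {
		req.Header.Set("Authorization", "Bearer "+string(token))
	}

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err) // the earlier TCP/TLS stages fail here instead
		return
	}
	defer resp.Body.Close()
	fmt.Println("livez status:", resp.StatusCode) // 403 maps to the probe failures above
}
```

The node-lease error in the same window shows the identical failure mode on the API side: the lease GET now reaches the server but is forbidden for system:anonymous, and the retry interval keeps doubling (1.6s, 3.2s, 6.4s).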
(probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:48:01 crc kubenswrapper[5120]: I0122 11:48:01.789198 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:01 crc kubenswrapper[5120]: I0122 11:48:01.790895 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:01 crc kubenswrapper[5120]: I0122 11:48:01.790974 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:01 crc kubenswrapper[5120]: I0122 11:48:01.790989 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:01 crc kubenswrapper[5120]: E0122 11:48:01.791457 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:01 crc kubenswrapper[5120]: I0122 11:48:01.794345 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:48:02 crc kubenswrapper[5120]: E0122 11:48:02.221535 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 11:48:02 crc kubenswrapper[5120]: I0122 11:48:02.700938 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:02 crc kubenswrapper[5120]: I0122 11:48:02.701524 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:02 crc kubenswrapper[5120]: I0122 11:48:02.701561 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:02 crc kubenswrapper[5120]: I0122 11:48:02.701571 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:02 crc kubenswrapper[5120]: E0122 11:48:02.701890 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:03 crc kubenswrapper[5120]: I0122 11:48:03.591226 5120 trace.go:236] Trace[1809451427]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 11:47:48.695) (total time: 14895ms): Jan 22 11:48:03 crc kubenswrapper[5120]: Trace[1809451427]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 14895ms (11:48:03.591) Jan 22 11:48:03 crc kubenswrapper[5120]: Trace[1809451427]: [14.895372328s] [14.895372328s] END Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.591279 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.591220 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.188d0b211eb0428c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.503167116 +0000 UTC m=+0.247115497,LastTimestamp:2026-01-22 11:47:45.503167116 +0000 UTC m=+0.247115497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: I0122 11:48:03.591333 5120 trace.go:236] Trace[685434573]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 11:47:49.173) (total time: 14417ms): Jan 22 11:48:03 crc kubenswrapper[5120]: Trace[685434573]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 14417ms (11:48:03.591) Jan 22 11:48:03 crc kubenswrapper[5120]: Trace[685434573]: [14.417846919s] [14.417846919s] END Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.591411 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 11:48:03 crc kubenswrapper[5120]: I0122 11:48:03.591336 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:03 crc kubenswrapper[5120]: I0122 11:48:03.591862 5120 trace.go:236] Trace[486807286]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 11:47:49.570) (total time: 14020ms): Jan 22 11:48:03 crc kubenswrapper[5120]: Trace[486807286]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 14020ms (11:48:03.591) Jan 22 11:48:03 crc kubenswrapper[5120]: Trace[486807286]: [14.020992051s] [14.020992051s] END Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.591920 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.592147 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.592476 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222b9488 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.56158068 +0000 UTC m=+0.305529021,LastTimestamp:2026-01-22 11:47:45.56158068 +0000 UTC m=+0.305529021,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.594095 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222bca99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561594521 +0000 UTC m=+0.305542862,LastTimestamp:2026-01-22 11:47:45.561594521 +0000 UTC m=+0.305542862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.597471 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222c033f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561609023 +0000 UTC m=+0.305557364,LastTimestamp:2026-01-22 11:47:45.561609023 +0000 UTC m=+0.305557364,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.603027 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b2125de880d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.623640077 +0000 UTC m=+0.367588418,LastTimestamp:2026-01-22 11:47:45.623640077 +0000 UTC m=+0.367588418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.609682 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222b9488\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222b9488 default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.56158068 +0000 UTC m=+0.305529021,LastTimestamp:2026-01-22 11:47:45.672749238 +0000 UTC m=+0.416697589,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.614349 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222bca99\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222bca99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561594521 +0000 UTC m=+0.305542862,LastTimestamp:2026-01-22 11:47:45.6727794 +0000 UTC m=+0.416727751,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.625549 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222c033f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222c033f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561609023 +0000 UTC m=+0.305557364,LastTimestamp:2026-01-22 11:47:45.672791571 +0000 UTC m=+0.416739932,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.632212 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222b9488\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222b9488 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.56158068 +0000 UTC m=+0.305529021,LastTimestamp:2026-01-22 11:47:45.677356131 +0000 UTC m=+0.421304472,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.636336 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222bca99\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222bca99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561594521 +0000 UTC m=+0.305542862,LastTimestamp:2026-01-22 11:47:45.677378262 +0000 UTC m=+0.421326603,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.643337 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222c033f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222c033f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561609023 +0000 UTC m=+0.305557364,LastTimestamp:2026-01-22 11:47:45.677388163 +0000 UTC m=+0.421336504,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.648438 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222b9488\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222b9488 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.56158068 +0000 UTC m=+0.305529021,LastTimestamp:2026-01-22 11:47:45.677873744 +0000 UTC m=+0.421822075,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.652215 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222bca99\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222bca99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561594521 +0000 UTC m=+0.305542862,LastTimestamp:2026-01-22 11:47:45.677897216 +0000 UTC m=+0.421845557,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.657279 5120 event.go:359] "Server rejected event (will not retry!)" 
err="events \"crc.188d0b21222c033f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222c033f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561609023 +0000 UTC m=+0.305557364,LastTimestamp:2026-01-22 11:47:45.677915378 +0000 UTC m=+0.421863719,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.661437 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222b9488\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222b9488 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.56158068 +0000 UTC m=+0.305529021,LastTimestamp:2026-01-22 11:47:45.679098986 +0000 UTC m=+0.423047327,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.667597 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222bca99\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222bca99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561594521 +0000 UTC m=+0.305542862,LastTimestamp:2026-01-22 11:47:45.679116347 +0000 UTC m=+0.423064688,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.674561 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222b9488\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222b9488 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.56158068 +0000 UTC m=+0.305529021,LastTimestamp:2026-01-22 11:47:45.679132719 +0000 UTC m=+0.423081060,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 
11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.679280 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222bca99\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222bca99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561594521 +0000 UTC m=+0.305542862,LastTimestamp:2026-01-22 11:47:45.67914795 +0000 UTC m=+0.423096291,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.685553 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222c033f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222c033f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561609023 +0000 UTC m=+0.305557364,LastTimestamp:2026-01-22 11:47:45.679157541 +0000 UTC m=+0.423105882,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.690284 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222c033f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222c033f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561609023 +0000 UTC m=+0.305557364,LastTimestamp:2026-01-22 11:47:45.679181143 +0000 UTC m=+0.423129484,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.696514 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222b9488\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222b9488 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.56158068 +0000 UTC m=+0.305529021,LastTimestamp:2026-01-22 11:47:45.6803523 +0000 UTC m=+0.424300641,Count:7,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.700836 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222bca99\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222bca99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561594521 +0000 UTC m=+0.305542862,LastTimestamp:2026-01-22 11:47:45.680363331 +0000 UTC m=+0.424311672,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.704415 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222c033f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222c033f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561609023 +0000 UTC m=+0.305557364,LastTimestamp:2026-01-22 11:47:45.680372842 +0000 UTC m=+0.424321183,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.708558 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222b9488\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222b9488 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.56158068 +0000 UTC m=+0.305529021,LastTimestamp:2026-01-22 11:47:45.680559987 +0000 UTC m=+0.424508338,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.713124 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d0b21222bca99\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d0b21222bca99 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:45.561594521 +0000 UTC 
m=+0.305542862,LastTimestamp:2026-01-22 11:47:45.68059516 +0000 UTC m=+0.424543511,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.719460 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b213e33dfb9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.031886265 +0000 UTC m=+0.775834606,LastTimestamp:2026-01-22 11:47:46.031886265 +0000 UTC m=+0.775834606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.723850 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d0b213f4c80b4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.050277556 +0000 UTC m=+0.794225887,LastTimestamp:2026-01-22 11:47:46.050277556 +0000 UTC m=+0.794225887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.728160 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d0b213f4ca47c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.050286716 +0000 UTC m=+0.794235057,LastTimestamp:2026-01-22 
11:47:46.050286716 +0000 UTC m=+0.794235057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.732281 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d0b2140d0f257 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.075734615 +0000 UTC m=+0.819682956,LastTimestamp:2026-01-22 11:47:46.075734615 +0000 UTC m=+0.819682956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.736031 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b214114fbc6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.080193478 +0000 UTC m=+0.824141819,LastTimestamp:2026-01-22 11:47:46.080193478 +0000 UTC m=+0.824141819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.740041 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d0b2158ac5f3c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.475990844 +0000 UTC m=+1.219939205,LastTimestamp:2026-01-22 11:47:46.475990844 +0000 UTC m=+1.219939205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 
crc kubenswrapper[5120]: E0122 11:48:03.743546 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d0b2158acff43 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.476031811 +0000 UTC m=+1.219980152,LastTimestamp:2026-01-22 11:47:46.476031811 +0000 UTC m=+1.219980152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.747163 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d0b2158c24ddf openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.477428191 +0000 UTC m=+1.221376522,LastTimestamp:2026-01-22 11:47:46.477428191 +0000 UTC m=+1.221376522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.751280 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b2158cfcd18 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.478312728 +0000 UTC m=+1.222261069,LastTimestamp:2026-01-22 11:47:46.478312728 +0000 UTC m=+1.222261069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.755058 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b2158d313bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.478527423 +0000 UTC m=+1.222475754,LastTimestamp:2026-01-22 11:47:46.478527423 +0000 UTC m=+1.222475754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.759127 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d0b21595536c3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.487056067 +0000 UTC m=+1.231004408,LastTimestamp:2026-01-22 11:47:46.487056067 +0000 UTC m=+1.231004408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.766947 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d0b21595864d8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.487264472 +0000 UTC m=+1.231212823,LastTimestamp:2026-01-22 11:47:46.487264472 +0000 UTC m=+1.231212823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.772702 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d0b215968c65f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.488338015 +0000 UTC m=+1.232286356,LastTimestamp:2026-01-22 11:47:46.488338015 +0000 UTC m=+1.232286356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.776887 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d0b2159795819 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.489423897 +0000 UTC m=+1.233372238,LastTimestamp:2026-01-22 11:47:46.489423897 +0000 UTC m=+1.233372238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.781047 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b2159cfd6d2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.495092434 +0000 UTC m=+1.239040775,LastTimestamp:2026-01-22 11:47:46.495092434 +0000 UTC m=+1.239040775,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.785870 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b2159d0185d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.495109213 +0000 UTC m=+1.239057554,LastTimestamp:2026-01-22 11:47:46.495109213 +0000 UTC m=+1.239057554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.789922 5120 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d0b215ef73fa5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.581561253 +0000 UTC m=+1.325509594,LastTimestamp:2026-01-22 11:47:46.581561253 +0000 UTC m=+1.325509594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.796328 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b215f32d69e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.585466526 +0000 UTC m=+1.329414867,LastTimestamp:2026-01-22 11:47:46.585466526 +0000 UTC m=+1.329414867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.807220 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b215f3f06c9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.586265289 +0000 UTC m=+1.330213630,LastTimestamp:2026-01-22 11:47:46.586265289 +0000 UTC m=+1.330213630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.813067 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d0b215f59cb13 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.588019475 +0000 UTC m=+1.331967816,LastTimestamp:2026-01-22 11:47:46.588019475 +0000 UTC m=+1.331967816,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.818286 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d0b216c4b9c26 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.805193766 +0000 UTC m=+1.549142107,LastTimestamp:2026-01-22 11:47:46.805193766 +0000 UTC m=+1.549142107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.823781 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b216c4d9225 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.805322277 +0000 UTC m=+1.549270628,LastTimestamp:2026-01-22 11:47:46.805322277 +0000 UTC m=+1.549270628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.828312 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d0b216c4dbf38 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.805333816 +0000 UTC m=+1.549282157,LastTimestamp:2026-01-22 11:47:46.805333816 +0000 UTC m=+1.549282157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.833428 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b216c5107c8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.805549 +0000 UTC m=+1.549497341,LastTimestamp:2026-01-22 11:47:46.805549 +0000 UTC m=+1.549497341,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.837854 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d0b216c52533e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.805633854 +0000 UTC m=+1.549582195,LastTimestamp:2026-01-22 11:47:46.805633854 +0000 UTC m=+1.549582195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.842541 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d0b216ceacd2f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.815626543 +0000 UTC 
m=+1.559574874,LastTimestamp:2026-01-22 11:47:46.815626543 +0000 UTC m=+1.559574874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.847882 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d0b216cfd3bb8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.816834488 +0000 UTC m=+1.560782819,LastTimestamp:2026-01-22 11:47:46.816834488 +0000 UTC m=+1.560782819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.851867 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d0b216d0345f0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.81723032 +0000 UTC m=+1.561178651,LastTimestamp:2026-01-22 11:47:46.81723032 +0000 UTC m=+1.561178651,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.855565 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d0b216d05fed7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.817408727 +0000 UTC m=+1.561357068,LastTimestamp:2026-01-22 11:47:46.817408727 +0000 UTC m=+1.561357068,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.859527 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b216d08984d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.817579085 +0000 UTC m=+1.561527426,LastTimestamp:2026-01-22 11:47:46.817579085 +0000 UTC m=+1.561527426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.864049 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d0b216d14c328 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.818376488 +0000 UTC m=+1.562324839,LastTimestamp:2026-01-22 11:47:46.818376488 +0000 UTC m=+1.562324839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.869213 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b216d189339 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.818626361 +0000 UTC m=+1.562574702,LastTimestamp:2026-01-22 11:47:46.818626361 +0000 UTC m=+1.562574702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.873841 5120 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b216d96810b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:46.826879243 +0000 UTC m=+1.570827584,LastTimestamp:2026-01-22 11:47:46.826879243 +0000 UTC m=+1.570827584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.883311 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d0b217f93f2a3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.128701603 +0000 UTC m=+1.872649944,LastTimestamp:2026-01-22 11:47:47.128701603 +0000 UTC m=+1.872649944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.888610 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d0b2180428a2b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.140143659 +0000 UTC m=+1.884092000,LastTimestamp:2026-01-22 11:47:47.140143659 +0000 UTC m=+1.884092000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.892509 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d0b2180507ecf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.141058255 +0000 UTC m=+1.885006606,LastTimestamp:2026-01-22 11:47:47.141058255 +0000 UTC m=+1.885006606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.897054 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d0b218dc01458 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.36647484 +0000 UTC m=+2.110423181,LastTimestamp:2026-01-22 11:47:47.36647484 +0000 UTC m=+2.110423181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.901597 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d0b218e6b17b1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.377682353 +0000 UTC m=+2.121630694,LastTimestamp:2026-01-22 11:47:47.377682353 +0000 UTC m=+2.121630694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.905649 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21910e2b77 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.421924215 +0000 UTC m=+2.165872546,LastTimestamp:2026-01-22 11:47:47.421924215 +0000 UTC m=+2.165872546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.910914 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d0b2191c38260 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.43380848 +0000 UTC m=+2.177756821,LastTimestamp:2026-01-22 11:47:47.43380848 +0000 UTC m=+2.177756821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.915019 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21921b5a44 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.43956538 +0000 UTC m=+2.183513721,LastTimestamp:2026-01-22 11:47:47.43956538 +0000 UTC m=+2.183513721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.923485 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b2192290bc3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.440462787 +0000 UTC m=+2.184411128,LastTimestamp:2026-01-22 11:47:47.440462787 +0000 UTC m=+2.184411128,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.930316 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d0b21925c9ff0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.443843056 +0000 UTC m=+2.187791407,LastTimestamp:2026-01-22 11:47:47.443843056 +0000 UTC m=+2.187791407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.935645 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d0b21927e1f12 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.44603829 +0000 UTC m=+2.189986631,LastTimestamp:2026-01-22 11:47:47.44603829 +0000 UTC m=+2.189986631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.941569 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b219baf6c9e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.60026435 +0000 UTC m=+2.344212681,LastTimestamp:2026-01-22 11:47:47.60026435 +0000 UTC m=+2.344212681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.950564 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21a2acb073 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.717525619 +0000 UTC m=+2.461473960,LastTimestamp:2026-01-22 11:47:47.717525619 +0000 UTC m=+2.461473960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.957189 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d0b21a2ad4666 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.717564006 +0000 UTC m=+2.461512347,LastTimestamp:2026-01-22 11:47:47.717564006 +0000 UTC m=+2.461512347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.963206 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21a42f719e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.742871966 +0000 UTC m=+2.486820307,LastTimestamp:2026-01-22 11:47:47.742871966 +0000 UTC m=+2.486820307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.969329 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21a43eb242 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.743871554 +0000 UTC m=+2.487819895,LastTimestamp:2026-01-22 11:47:47.743871554 +0000 UTC m=+2.487819895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.974201 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d0b21a48d9d64 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.749043556 +0000 UTC m=+2.492991897,LastTimestamp:2026-01-22 11:47:47.749043556 +0000 UTC m=+2.492991897,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.983353 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b21ad3ed154 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.894874452 +0000 UTC m=+2.638822793,LastTimestamp:2026-01-22 11:47:47.894874452 +0000 UTC m=+2.638822793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.988913 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b21aef71892 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:47.92372853 +0000 UTC m=+2.667676871,LastTimestamp:2026-01-22 11:47:47.92372853 +0000 UTC m=+2.667676871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.994392 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21b597a1e3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.034912739 +0000 UTC m=+2.778861080,LastTimestamp:2026-01-22 11:47:48.034912739 +0000 UTC m=+2.778861080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:03 crc kubenswrapper[5120]: E0122 11:48:03.998882 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21b60649cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.042164685 +0000 UTC m=+2.786113026,LastTimestamp:2026-01-22 11:47:48.042164685 +0000 UTC m=+2.786113026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.004929 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21b61b22f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.043531 +0000 UTC m=+2.787479341,LastTimestamp:2026-01-22 11:47:48.043531 +0000 UTC m=+2.787479341,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.010024 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21c2628526 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.249535782 +0000 UTC m=+2.993484123,LastTimestamp:2026-01-22 11:47:48.249535782 +0000 UTC m=+2.993484123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.018390 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21c2e176d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.257855188 +0000 UTC m=+3.001803529,LastTimestamp:2026-01-22 11:47:48.257855188 +0000 UTC m=+3.001803529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.021316 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b21da4cc994 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.650764692 +0000 UTC m=+3.394713043,LastTimestamp:2026-01-22 11:47:48.650764692 +0000 UTC m=+3.394713043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.028687 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b21e71abcf5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.865588469 +0000 UTC m=+3.609536820,LastTimestamp:2026-01-22 11:47:48.865588469 +0000 UTC m=+3.609536820,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.036320 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b21e79a3f4a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.873944906 +0000 UTC m=+3.617893247,LastTimestamp:2026-01-22 11:47:48.873944906 +0000 UTC m=+3.617893247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.042818 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b21e7a5251b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.874659099 +0000 UTC m=+3.618607430,LastTimestamp:2026-01-22 11:47:48.874659099 +0000 UTC m=+3.618607430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.049268 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b21f1b9e069 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.043789929 +0000 UTC m=+3.787738280,LastTimestamp:2026-01-22 11:47:49.043789929 +0000 UTC m=+3.787738280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.055182 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b21f2565f81 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.054046081 +0000 UTC m=+3.797994422,LastTimestamp:2026-01-22 11:47:49.054046081 +0000 UTC m=+3.797994422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.061310 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b21f26570ec openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.05503358 +0000 UTC m=+3.798981931,LastTimestamp:2026-01-22 11:47:49.05503358 +0000 UTC m=+3.798981931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.066300 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b22040137cf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.350455247 +0000 UTC m=+4.094403588,LastTimestamp:2026-01-22 11:47:49.350455247 +0000 UTC m=+4.094403588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.071716 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b2204b9eaf6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.362559734 +0000 UTC m=+4.106508075,LastTimestamp:2026-01-22 11:47:49.362559734 +0000 UTC m=+4.106508075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.076491 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b2204c862ef openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.363507951 +0000 UTC m=+4.107456312,LastTimestamp:2026-01-22 11:47:49.363507951 +0000 UTC m=+4.107456312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.084322 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b2211154bca openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.56987489 +0000 UTC m=+4.313823241,LastTimestamp:2026-01-22 11:47:49.56987489 +0000 UTC m=+4.313823241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.088450 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52820->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.088460 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52822->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.088572 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52820->192.168.126.11:17697: read: connection reset by peer"
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.088656 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52822->192.168.126.11:17697: read: connection reset by peer"
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.089018 5120 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.089086 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.089616 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b2211cd7aa1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.581945505 +0000 UTC m=+4.325893846,LastTimestamp:2026-01-22 11:47:49.581945505 +0000 UTC m=+4.325893846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.093973 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b2211dc7c94 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.582929044 +0000 UTC m=+4.326877385,LastTimestamp:2026-01-22 11:47:49.582929044 +0000 UTC m=+4.326877385,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.099071 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b222113b4cf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.838206159 +0000 UTC m=+4.582154510,LastTimestamp:2026-01-22 11:47:49.838206159 +0000 UTC m=+4.582154510,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.102562 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d0b2221f9391b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:49.853247771 +0000 UTC m=+4.597196122,LastTimestamp:2026-01-22 11:47:49.853247771 +0000 UTC m=+4.597196122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.108404 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Jan 22 11:48:04 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-controller-manager-crc.188d0b2425302a1a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 22 11:48:04 crc kubenswrapper[5120]: body:
Jan 22 11:48:04 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:58.49711465 +0000 UTC m=+13.241063001,LastTimestamp:2026-01-22 11:47:58.49711465 +0000 UTC m=+13.241063001,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 22 11:48:04 crc kubenswrapper[5120]: >
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.112215 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d0b242531b5e4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:58.497215972 +0000 UTC m=+13.241164333,LastTimestamp:2026-01-22 11:47:58.497215972 +0000 UTC m=+13.241164333,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.116596 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 22 11:48:04 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.188d0b242aa49c98 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Jan 22 11:48:04 crc kubenswrapper[5120]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 22 11:48:04 crc kubenswrapper[5120]:
Jan 22 11:48:04 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:58.588632216 +0000 UTC m=+13.332580557,LastTimestamp:2026-01-22 11:47:58.588632216 +0000 UTC m=+13.332580557,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 22 11:48:04 crc kubenswrapper[5120]: >
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.120059 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b242aa552e7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:58.588678887 +0000 UTC m=+13.332627228,LastTimestamp:2026-01-22 11:47:58.588678887 +0000 UTC m=+13.332627228,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.124230 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b242aa49c98\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 22 11:48:04 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.188d0b242aa49c98 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Jan 22 11:48:04 crc kubenswrapper[5120]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 22 11:48:04 crc kubenswrapper[5120]:
Jan 22 11:48:04 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:58.588632216 +0000 UTC m=+13.332580557,LastTimestamp:2026-01-22 11:47:58.598695578 +0000 UTC m=+13.342643949,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 22 11:48:04 crc kubenswrapper[5120]: >
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.129840 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b242aa552e7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b242aa552e7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:58.588678887 +0000 UTC m=+13.332627228,LastTimestamp:2026-01-22 11:47:58.598760579 +0000 UTC m=+13.342708960,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.135011 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 22 11:48:04 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.188d0b2572763d10 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:52820->192.168.126.11:17697: read: connection reset by peer
Jan 22 11:48:04 crc kubenswrapper[5120]: body:
Jan 22 11:48:04 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:04.088519952 +0000 UTC m=+18.832468303,LastTimestamp:2026-01-22 11:48:04.088519952 +0000 UTC m=+18.832468303,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 22 11:48:04 crc kubenswrapper[5120]: >
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.141846 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b257277916c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52820->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:04.088607084 +0000 UTC m=+18.832555435,LastTimestamp:2026-01-22 11:48:04.088607084 +0000 UTC m=+18.832555435,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.152566 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 22 11:48:04 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.188d0b257277a256 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:52822->192.168.126.11:17697: read: connection reset by peer
Jan 22 11:48:04 crc kubenswrapper[5120]: body:
Jan 22 11:48:04 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:04.088611414 +0000 UTC m=+18.832559755,LastTimestamp:2026-01-22 11:48:04.088611414 +0000 UTC m=+18.832559755,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 22 11:48:04 crc kubenswrapper[5120]: >
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.156729 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b257278cb38 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52822->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:04.088687416 +0000 UTC m=+18.832635757,LastTimestamp:2026-01-22 11:48:04.088687416 +0000 UTC m=+18.832635757,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.160774 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 22 11:48:04 crc kubenswrapper[5120]: &Event{ObjectMeta:{kube-apiserver-crc.188d0b25727e7a8f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Jan 22 11:48:04 crc kubenswrapper[5120]: body:
Jan 22 11:48:04 crc kubenswrapper[5120]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:04.089059983 +0000 UTC m=+18.833008324,LastTimestamp:2026-01-22 11:48:04.089059983 +0000 UTC m=+18.833008324,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 22 11:48:04 crc kubenswrapper[5120]: >
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.165630 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b25727f89bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:04.089129405 +0000 UTC m=+18.833077746,LastTimestamp:2026-01-22 11:48:04.089129405 +0000 UTC m=+18.833077746,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.493288 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.713594 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.716079 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b5b0652ae23f85601c29c923290c2f9697de7cdb60bc871e65a366f54e67be88" exitCode=255
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.716245 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b5b0652ae23f85601c29c923290c2f9697de7cdb60bc871e65a366f54e67be88"}
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.716635 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.724143 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.724310 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.724584 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.725468 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 11:48:04 crc kubenswrapper[5120]: I0122 11:48:04.725893 5120 scope.go:117] "RemoveContainer" containerID="b5b0652ae23f85601c29c923290c2f9697de7cdb60bc871e65a366f54e67be88"
Jan 22 11:48:04 crc kubenswrapper[5120]: E0122 11:48:04.757254 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b21b61b22f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21b61b22f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.043531 +0000 UTC m=+2.787479341,LastTimestamp:2026-01-22 11:48:04.727664564 +0000 UTC m=+19.471612905,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:05 crc kubenswrapper[5120]: E0122 11:48:05.229800 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b21c2628526\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21c2628526 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.249535782 +0000 UTC m=+2.993484123,LastTimestamp:2026-01-22 11:48:05.224327746 +0000 UTC m=+19.968276087,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:05 crc kubenswrapper[5120]: E0122 11:48:05.242737 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b21c2e176d4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21c2e176d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.257855188 +0000 UTC m=+3.001803529,LastTimestamp:2026-01-22 11:48:05.236185438 +0000 UTC m=+19.980133779,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.492238 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.501929 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.502157 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.503315 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.503365 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.503381 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:05 crc kubenswrapper[5120]: E0122 11:48:05.503733 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.506121 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.507154 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 11:48:05 crc kubenswrapper[5120]: E0122 11:48:05.615081 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.720999 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.723481 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092"}
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.723626 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.723760 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.724358 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.724397 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.724407 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:05 crc kubenswrapper[5120]: E0122 11:48:05.724729 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.725061 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.725244 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:05 crc kubenswrapper[5120]: I0122 11:48:05.725596 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:05 crc kubenswrapper[5120]: E0122 11:48:05.726266 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.080649 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.080923 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.081703 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.081742 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.081753 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.082172 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.114150 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.494351 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.728622 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.730049 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.732753 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092" exitCode=255
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.732850 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092"}
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.732943 5120 scope.go:117] "RemoveContainer" containerID="b5b0652ae23f85601c29c923290c2f9697de7cdb60bc871e65a366f54e67be88"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.733035 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.733507 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.733544 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.733556 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.733514 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.733510 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.734154 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.735355 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.735371 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.735392 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.735409 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.735431 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.735464 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.736282 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.737239 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.737785 5120 scope.go:117] "RemoveContainer" containerID="ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092"
Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.738317 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.743677 5120 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b261065b15d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:06.738235741 +0000 UTC m=+21.482184122,LastTimestamp:2026-01-22 11:48:06.738235741 +0000 UTC m=+21.482184122,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.792597 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.793695 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.793751 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.793771 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:06 crc kubenswrapper[5120]: I0122 11:48:06.793810 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.802221 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 22 11:48:06 crc kubenswrapper[5120]: E0122 11:48:06.947015 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 22 11:48:07 crc kubenswrapper[5120]: I0122 11:48:07.494026 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:07 crc kubenswrapper[5120]: I0122 11:48:07.738278 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 22 11:48:08 crc kubenswrapper[5120]: E0122 11:48:08.120128 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 22 11:48:08 crc kubenswrapper[5120]: I0122 11:48:08.493472 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:08 crc kubenswrapper[5120]: E0122 11:48:08.710649 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 22 11:48:09 crc kubenswrapper[5120]: E0122 11:48:09.030525 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 22 11:48:09 crc kubenswrapper[5120]: I0122 11:48:09.494319 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:10 crc kubenswrapper[5120]: I0122 11:48:10.496548 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 22 11:48:11 crc kubenswrapper[5120]: I0122 11:48:11.497042 5120 csi_plugin.go:988] Failed to contact API server
when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:11 crc kubenswrapper[5120]: I0122 11:48:11.726051 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:48:11 crc kubenswrapper[5120]: I0122 11:48:11.726333 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:11 crc kubenswrapper[5120]: I0122 11:48:11.727230 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:11 crc kubenswrapper[5120]: I0122 11:48:11.727271 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:11 crc kubenswrapper[5120]: I0122 11:48:11.727284 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:11 crc kubenswrapper[5120]: E0122 11:48:11.727647 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:11 crc kubenswrapper[5120]: I0122 11:48:11.727910 5120 scope.go:117] "RemoveContainer" containerID="ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092" Jan 22 11:48:11 crc kubenswrapper[5120]: E0122 11:48:11.728115 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:48:11 crc kubenswrapper[5120]: E0122 11:48:11.736432 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b261065b15d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b261065b15d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:06.738235741 +0000 UTC m=+21.482184122,LastTimestamp:2026-01-22 11:48:11.728084697 +0000 UTC m=+26.472033038,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:12 crc kubenswrapper[5120]: I0122 11:48:12.498429 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:13 crc kubenswrapper[5120]: I0122 11:48:13.202499 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Jan 22 11:48:13 crc kubenswrapper[5120]: I0122 11:48:13.203602 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:13 crc kubenswrapper[5120]: I0122 11:48:13.203702 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:13 crc kubenswrapper[5120]: I0122 11:48:13.203730 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:13 crc kubenswrapper[5120]: I0122 11:48:13.203783 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:48:13 crc kubenswrapper[5120]: E0122 11:48:13.219823 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 11:48:13 crc kubenswrapper[5120]: E0122 11:48:13.289862 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 11:48:13 crc kubenswrapper[5120]: I0122 11:48:13.498027 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:14 crc kubenswrapper[5120]: E0122 11:48:14.435744 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 11:48:14 crc kubenswrapper[5120]: I0122 11:48:14.494540 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:15 crc kubenswrapper[5120]: E0122 11:48:15.127877 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 11:48:15 crc kubenswrapper[5120]: I0122 11:48:15.498383 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:15 crc kubenswrapper[5120]: E0122 11:48:15.615769 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 11:48:15 crc kubenswrapper[5120]: I0122 11:48:15.724321 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:48:15 crc kubenswrapper[5120]: I0122 11:48:15.724765 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:15 crc kubenswrapper[5120]: 
I0122 11:48:15.725693 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:15 crc kubenswrapper[5120]: I0122 11:48:15.725813 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:15 crc kubenswrapper[5120]: I0122 11:48:15.725880 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:15 crc kubenswrapper[5120]: E0122 11:48:15.726375 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:15 crc kubenswrapper[5120]: I0122 11:48:15.726747 5120 scope.go:117] "RemoveContainer" containerID="ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092" Jan 22 11:48:15 crc kubenswrapper[5120]: E0122 11:48:15.727028 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:48:15 crc kubenswrapper[5120]: E0122 11:48:15.735097 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b261065b15d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b261065b15d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:06.738235741 +0000 UTC m=+21.482184122,LastTimestamp:2026-01-22 11:48:15.726996993 +0000 UTC m=+30.470945334,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:16 crc kubenswrapper[5120]: I0122 11:48:16.495156 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:16 crc kubenswrapper[5120]: E0122 11:48:16.618611 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 11:48:16 crc kubenswrapper[5120]: E0122 11:48:16.758536 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 11:48:17 crc kubenswrapper[5120]: I0122 11:48:17.495950 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:18 crc kubenswrapper[5120]: I0122 11:48:18.494788 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:19 crc kubenswrapper[5120]: I0122 11:48:19.493514 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:20 crc kubenswrapper[5120]: I0122 11:48:20.220982 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:20 crc kubenswrapper[5120]: I0122 11:48:20.222622 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:20 crc kubenswrapper[5120]: I0122 11:48:20.222679 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:20 crc kubenswrapper[5120]: I0122 11:48:20.222697 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:20 crc kubenswrapper[5120]: I0122 11:48:20.222736 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:48:20 crc kubenswrapper[5120]: E0122 11:48:20.238289 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 11:48:20 crc kubenswrapper[5120]: I0122 11:48:20.496798 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:21 crc kubenswrapper[5120]: I0122 11:48:21.499310 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:22 crc kubenswrapper[5120]: E0122 11:48:22.133638 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 11:48:22 crc kubenswrapper[5120]: I0122 11:48:22.494456 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:23 crc kubenswrapper[5120]: I0122 11:48:23.494056 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:24 crc kubenswrapper[5120]: I0122 11:48:24.495548 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:25 crc kubenswrapper[5120]: I0122 11:48:25.494322 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:25 crc kubenswrapper[5120]: E0122 11:48:25.616478 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.493830 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.571424 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.572249 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.572298 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.572312 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:26 crc kubenswrapper[5120]: E0122 11:48:26.572617 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.572882 5120 scope.go:117] "RemoveContainer" containerID="ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092" Jan 22 11:48:26 crc kubenswrapper[5120]: E0122 11:48:26.578682 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b21b61b22f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21b61b22f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.043531 +0000 UTC m=+2.787479341,LastTimestamp:2026-01-22 11:48:26.57380984 +0000 UTC m=+41.317758181,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:26 crc kubenswrapper[5120]: E0122 11:48:26.773903 5120 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b21c2628526\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21c2628526 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.249535782 +0000 UTC m=+2.993484123,LastTimestamp:2026-01-22 11:48:26.768086515 +0000 UTC m=+41.512034846,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.794434 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.796724 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b"} Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.797009 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:26 crc kubenswrapper[5120]: E0122 11:48:26.797428 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b21c2e176d4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21c2e176d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:47:48.257855188 +0000 UTC m=+3.001803529,LastTimestamp:2026-01-22 11:48:26.790226214 +0000 UTC m=+41.534174555,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.798084 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.798119 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:26 crc kubenswrapper[5120]: I0122 11:48:26.798129 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:26 crc kubenswrapper[5120]: E0122 11:48:26.798444 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 
22 11:48:27 crc kubenswrapper[5120]: I0122 11:48:27.239008 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:27 crc kubenswrapper[5120]: I0122 11:48:27.239840 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:27 crc kubenswrapper[5120]: I0122 11:48:27.239879 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:27 crc kubenswrapper[5120]: I0122 11:48:27.239892 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:27 crc kubenswrapper[5120]: I0122 11:48:27.239912 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:48:27 crc kubenswrapper[5120]: E0122 11:48:27.247112 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 11:48:27 crc kubenswrapper[5120]: I0122 11:48:27.495693 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.495792 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.804755 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.805569 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.809087 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b" exitCode=255 Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.809200 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b"} Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.809287 5120 scope.go:117] "RemoveContainer" containerID="ebbae86fffc27cf71b33437f1449edab3f60609cf1c8191d9e9295da2d9c9092" Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.809655 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.810928 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.811061 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.811092 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:28 crc kubenswrapper[5120]: E0122 11:48:28.812097 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:28 crc kubenswrapper[5120]: I0122 11:48:28.812672 5120 scope.go:117] "RemoveContainer" containerID="42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b" Jan 22 11:48:28 crc kubenswrapper[5120]: E0122 11:48:28.813120 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:48:28 crc kubenswrapper[5120]: E0122 11:48:28.822414 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b261065b15d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b261065b15d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:06.738235741 +0000 UTC m=+21.482184122,LastTimestamp:2026-01-22 11:48:28.813046112 +0000 UTC m=+43.556994483,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:29 crc kubenswrapper[5120]: E0122 11:48:29.143146 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 11:48:29 crc kubenswrapper[5120]: E0122 11:48:29.235921 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 22 11:48:29 crc kubenswrapper[5120]: I0122 11:48:29.498214 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:29 crc kubenswrapper[5120]: I0122 11:48:29.814369 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 22 11:48:30 crc kubenswrapper[5120]: I0122 11:48:30.498656 5120 csi_plugin.go:988] Failed to contact API server when waiting for 
CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:31 crc kubenswrapper[5120]: I0122 11:48:31.495705 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:31 crc kubenswrapper[5120]: I0122 11:48:31.726148 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:48:31 crc kubenswrapper[5120]: I0122 11:48:31.726609 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:31 crc kubenswrapper[5120]: I0122 11:48:31.728760 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:31 crc kubenswrapper[5120]: I0122 11:48:31.728820 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:31 crc kubenswrapper[5120]: I0122 11:48:31.728832 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:31 crc kubenswrapper[5120]: E0122 11:48:31.729227 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:31 crc kubenswrapper[5120]: I0122 11:48:31.729501 5120 scope.go:117] "RemoveContainer" containerID="42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b" Jan 22 11:48:31 crc kubenswrapper[5120]: E0122 11:48:31.729689 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:48:31 crc kubenswrapper[5120]: E0122 11:48:31.736102 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b261065b15d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b261065b15d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:06.738235741 +0000 UTC m=+21.482184122,LastTimestamp:2026-01-22 11:48:31.729660931 +0000 UTC m=+46.473609272,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:32 crc kubenswrapper[5120]: E0122 11:48:32.054502 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User 
\"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 22 11:48:32 crc kubenswrapper[5120]: I0122 11:48:32.495562 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:33 crc kubenswrapper[5120]: E0122 11:48:33.457727 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 22 11:48:33 crc kubenswrapper[5120]: I0122 11:48:33.490984 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:34 crc kubenswrapper[5120]: I0122 11:48:34.247672 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:34 crc kubenswrapper[5120]: I0122 11:48:34.251201 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:34 crc kubenswrapper[5120]: I0122 11:48:34.251413 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:34 crc kubenswrapper[5120]: I0122 11:48:34.251457 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:34 crc kubenswrapper[5120]: I0122 11:48:34.251499 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:48:34 crc kubenswrapper[5120]: E0122 11:48:34.268174 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 11:48:34 crc kubenswrapper[5120]: I0122 11:48:34.495244 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:35 crc kubenswrapper[5120]: I0122 11:48:35.494036 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:35 crc kubenswrapper[5120]: E0122 11:48:35.616942 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 11:48:36 crc kubenswrapper[5120]: E0122 11:48:36.147711 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 11:48:36 crc kubenswrapper[5120]: I0122 11:48:36.494819 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:36 crc kubenswrapper[5120]: I0122 11:48:36.797639 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:48:36 crc kubenswrapper[5120]: I0122 11:48:36.798065 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:36 crc kubenswrapper[5120]: I0122 11:48:36.799572 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:36 crc kubenswrapper[5120]: I0122 11:48:36.799651 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:36 crc kubenswrapper[5120]: I0122 11:48:36.799681 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:36 crc kubenswrapper[5120]: E0122 11:48:36.800476 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:36 crc kubenswrapper[5120]: I0122 11:48:36.800920 5120 scope.go:117] "RemoveContainer" containerID="42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b" Jan 22 11:48:36 crc kubenswrapper[5120]: E0122 11:48:36.801390 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:48:36 crc kubenswrapper[5120]: E0122 11:48:36.808843 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b261065b15d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b261065b15d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:48:06.738235741 +0000 UTC m=+21.482184122,LastTimestamp:2026-01-22 11:48:36.801323832 +0000 UTC m=+51.545272213,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:37 crc kubenswrapper[5120]: I0122 11:48:37.493618 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:38 crc kubenswrapper[5120]: I0122 11:48:38.495546 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:39 crc kubenswrapper[5120]: I0122 11:48:39.494564 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:40 crc kubenswrapper[5120]: I0122 11:48:40.495807 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:40 crc kubenswrapper[5120]: I0122 11:48:40.744086 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:48:40 crc kubenswrapper[5120]: I0122 11:48:40.744681 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:40 crc kubenswrapper[5120]: I0122 11:48:40.745792 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:40 crc kubenswrapper[5120]: I0122 11:48:40.745934 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:40 crc kubenswrapper[5120]: I0122 11:48:40.746104 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:40 crc kubenswrapper[5120]: E0122 11:48:40.746492 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:41 crc kubenswrapper[5120]: I0122 11:48:41.268344 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:41 crc kubenswrapper[5120]: I0122 11:48:41.269439 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:41 crc kubenswrapper[5120]: I0122 11:48:41.269493 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:41 crc kubenswrapper[5120]: I0122 11:48:41.269508 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:41 crc kubenswrapper[5120]: I0122 11:48:41.269537 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:48:41 crc kubenswrapper[5120]: E0122 11:48:41.278648 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 22 11:48:41 crc kubenswrapper[5120]: I0122 11:48:41.493345 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:41 crc kubenswrapper[5120]: E0122 11:48:41.965935 5120 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the 
cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 22 11:48:42 crc kubenswrapper[5120]: I0122 11:48:42.494018 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:43 crc kubenswrapper[5120]: E0122 11:48:43.152097 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 11:48:43 crc kubenswrapper[5120]: I0122 11:48:43.495019 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:44 crc kubenswrapper[5120]: I0122 11:48:44.495085 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:45 crc kubenswrapper[5120]: I0122 11:48:45.496348 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:45 crc kubenswrapper[5120]: E0122 11:48:45.617975 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 11:48:46 crc kubenswrapper[5120]: I0122 11:48:46.493897 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:47 crc kubenswrapper[5120]: I0122 11:48:47.494307 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:48 crc kubenswrapper[5120]: I0122 11:48:48.279611 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:48 crc kubenswrapper[5120]: I0122 11:48:48.281213 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:48 crc kubenswrapper[5120]: I0122 11:48:48.281318 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:48 crc kubenswrapper[5120]: I0122 11:48:48.281349 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:48 crc kubenswrapper[5120]: I0122 11:48:48.281408 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 22 11:48:48 crc kubenswrapper[5120]: E0122 11:48:48.298630 5120 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at 
the cluster scope" node="crc" Jan 22 11:48:48 crc kubenswrapper[5120]: I0122 11:48:48.496555 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.494064 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.571706 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.572262 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.572479 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.572503 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.572512 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:49 crc kubenswrapper[5120]: E0122 11:48:49.572763 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.573108 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.573148 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.573161 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:49 crc kubenswrapper[5120]: E0122 11:48:49.573523 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.573788 5120 scope.go:117] "RemoveContainer" containerID="42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b" Jan 22 11:48:49 crc kubenswrapper[5120]: E0122 11:48:49.580615 5120 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d0b21b61b22f8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d0b21b61b22f8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 
11:47:48.043531 +0000 UTC m=+2.787479341,LastTimestamp:2026-01-22 11:48:49.575127869 +0000 UTC m=+64.319076210,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.870237 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.872175 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f"} Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.872420 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.873096 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.873184 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:49 crc kubenswrapper[5120]: I0122 11:48:49.873205 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:49 crc kubenswrapper[5120]: E0122 11:48:49.873852 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:50 crc kubenswrapper[5120]: E0122 11:48:50.158438 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 22 11:48:50 crc kubenswrapper[5120]: I0122 11:48:50.495010 5120 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 22 11:48:50 crc kubenswrapper[5120]: I0122 11:48:50.662058 5120 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-dp5b7" Jan 22 11:48:50 crc kubenswrapper[5120]: I0122 11:48:50.667433 5120 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-dp5b7" Jan 22 11:48:50 crc kubenswrapper[5120]: I0122 11:48:50.708136 5120 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 22 11:48:51 crc kubenswrapper[5120]: I0122 11:48:51.401837 5120 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 22 11:48:51 crc kubenswrapper[5120]: I0122 11:48:51.669012 5120 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-21 11:43:50 +0000 UTC" deadline="2026-02-14 08:50:26.033044572 +0000 UTC" Jan 22 11:48:51 crc kubenswrapper[5120]: I0122 11:48:51.669101 5120 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" 
sleep="549h1m34.363948101s" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.881529 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.882008 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.883423 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" exitCode=255 Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.883471 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f"} Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.883525 5120 scope.go:117] "RemoveContainer" containerID="42b2d68814b3d5e68556995825f0318ea28c47c2cd43a1dff298d7157752167b" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.883707 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.884240 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.884276 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.884287 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:52 crc kubenswrapper[5120]: E0122 11:48:52.884723 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:52 crc kubenswrapper[5120]: I0122 11:48:52.885080 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:48:52 crc kubenswrapper[5120]: E0122 11:48:52.885323 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:48:53 crc kubenswrapper[5120]: I0122 11:48:53.888191 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.299706 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.300698 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.300793 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.300808 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.300992 5120 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.310927 5120 kubelet_node_status.go:127] "Node was previously registered" node="crc"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.311287 5120 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.311317 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.314603 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.314646 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.314662 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.314681 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.314695 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:48:55Z","lastTransitionTime":"2026-01-22T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.332385 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.340345 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.340408 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.340421 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.340444 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.340454 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:48:55Z","lastTransitionTime":"2026-01-22T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.359041 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.359108 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.359123 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.359142 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.359155 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:48:55Z","lastTransitionTime":"2026-01-22T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.382652 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.382682 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.382691 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.382707 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:48:55 crc kubenswrapper[5120]: I0122 11:48:55.382719 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:48:55Z","lastTransitionTime":"2026-01-22T11:48:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
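Each failed heartbeat above is the kubelet sending a strategic-merge patch to the Node's status subresource; the API server then calls the node.network-node-identity.openshift.io admission webhook, and with nothing listening on 127.0.0.1:9743 yet, every attempt dies at connect(). A minimal client-go sketch of the same kind of request (the kubeconfig path and the heavily trimmed payload are illustrative assumptions, not the kubelet's actual patch):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // A drastically trimmed version of the patch in the log: only the
        // Ready condition, merged by its "type" key.
        patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`)

        // The trailing "status" argument targets the status subresource;
        // admission webhooks registered for nodes/status still intercept
        // the request, which is where the connection-refused error above
        // surfaced.
        _, err = cs.CoreV1().Nodes().Patch(context.TODO(), "crc",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
        fmt.Println("patch result:", err)
    }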
Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.396359 5120 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.396388 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.497206 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.597826 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.619472 5120 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.698265 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.798714 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:55 crc kubenswrapper[5120]: E0122 11:48:55.899467 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.000249 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.100387 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.200697 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.301454 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.401921 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.502265 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.603158 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.703863 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.804813 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:56 crc kubenswrapper[5120]: E0122 11:48:56.905991 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.006288 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.107486 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.208595 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
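The roughly once-per-100ms drumbeat of "Error getting the current node from lister" is the kubelet reading the Node object from its local informer cache rather than from the API server; until the watch delivers the object, the lister keeps answering not found. A self-contained sketch of that read path with client-go (standalone program with an assumed kubeconfig path; not kubelet code):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // The lister answers from a locally synced cache, so Get can
        // return "not found" even while the object exists server-side
        // but has not yet been delivered by the watch.
        factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
        nodeLister := factory.Core().V1().Nodes().Lister()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        factory.WaitForCacheSync(stop)

        if _, err := nodeLister.Get("crc"); err != nil {
            fmt.Println("lister:", err) // e.g.: node "crc" not found
        }
    }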
kubenswrapper[5120]: E0122 11:48:57.308848 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.409994 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.510750 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.610855 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.711473 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.811551 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:57 crc kubenswrapper[5120]: E0122 11:48:57.912020 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.012729 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.113119 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.214147 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.314694 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.415156 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.516225 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.616643 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.716801 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.817462 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:58 crc kubenswrapper[5120]: E0122 11:48:58.918282 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.018731 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.119072 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.219986 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.320514 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 
22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.420837 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.520976 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.621775 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.722228 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.822786 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:48:59 crc kubenswrapper[5120]: I0122 11:48:59.873091 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:48:59 crc kubenswrapper[5120]: I0122 11:48:59.873376 5120 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 22 11:48:59 crc kubenswrapper[5120]: I0122 11:48:59.874205 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:48:59 crc kubenswrapper[5120]: I0122 11:48:59.874269 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:48:59 crc kubenswrapper[5120]: I0122 11:48:59.874284 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.874782 5120 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 22 11:48:59 crc kubenswrapper[5120]: I0122 11:48:59.875082 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.875386 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:48:59 crc kubenswrapper[5120]: E0122 11:48:59.923935 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.024417 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.124580 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.224719 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.325818 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.426638 5120 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.527732 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.627829 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.727995 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.829038 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:00 crc kubenswrapper[5120]: E0122 11:49:00.930008 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.030916 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.131215 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.231368 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.332161 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.432592 5120 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.485608 5120 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.508889 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.521732 5120 apiserver.go:52] "Watching apiserver" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.524067 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.528083 5120 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.528696 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-wrdkl","openshift-multus/multus-4lzht","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-operator/iptables-alerter-5jnd7","openshift-machine-config-operator/machine-config-daemon-dq269","openshift-multus/multus-additional-cni-plugins-rg989","openshift-image-registry/node-ca-tf9nb","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-node-2mf7v","openshift-multus/network-metrics-daemon-ldwx4","openshift-network-diagnostics/network-check-target-fhkjl","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79"] Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.530166 5120 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.532397 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.532478 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.532591 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.534536 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.534606 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.534626 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.534650 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.534669 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:01Z","lastTransitionTime":"2026-01-22T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.535325 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.536046 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.536488 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.536564 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.537611 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.538874 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.543176 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.544060 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.544764 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.545913 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.546252 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.546438 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.547778 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.549126 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.554018 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.570419 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.585427 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.597538 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.608805 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.621463 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.623629 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.626363 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.629380 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.630391 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.632769 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.633589 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.633557 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.633588 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.633881 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.634025 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.634086 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.634178 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.636137 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.638596 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.638635 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.638646 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.638664 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.638679 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:01Z","lastTransitionTime":"2026-01-22T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.640364 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.640564 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.640671 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.640901 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.647482 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.650361 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.652316 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.653527 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.653752 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.653797 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.653861 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.653835 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.654014 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.654160 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.654045 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.654431 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.654457 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.655513 5120 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.655583 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.655667 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.655847 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.658040 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.658099 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.658146 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.659686 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.660255 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.663796 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.667300 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.667359 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.667540 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.679258 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686044 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686086 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-hosts-file\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686117 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686140 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: 
\"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686171 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686195 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686214 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686233 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686292 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686310 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686329 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686353 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686371 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686392 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-tmp-dir\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686412 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgcrk\" (UniqueName: \"kubernetes.io/projected/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-kube-api-access-dgcrk\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686485 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.686512 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.687176 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.687298 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:02.187270179 +0000 UTC m=+76.931218530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.687460 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.687544 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-01-22 11:49:02.187528025 +0000 UTC m=+76.931476556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.687816 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.688281 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.689068 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.690741 5120 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.693519 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 
11:49:01.706222 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.706321 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.706348 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.706514 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:02.20647829 +0000 UTC m=+76.950426671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.707396 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.707450 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.707473 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.707562 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:02.207525295 +0000 UTC m=+76.951473676 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.708020 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.708392 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.709664 5120 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.712228 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.716798 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.717521 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.721006 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.723197 5120 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.723295 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.725114 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.726137 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.738099 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.741126 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.741161 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.741172 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.741190 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.741202 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:01Z","lastTransitionTime":"2026-01-22T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.758837 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.775186 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.785638 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.786871 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787104 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787167 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787198 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787218 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787232 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787251 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787267 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787283 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787298 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: 
\"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787316 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787397 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787412 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787424 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787430 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787477 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787498 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787514 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787531 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787548 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787564 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787579 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787596 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787613 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787636 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787652 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787691 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787708 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787726 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc 
kubenswrapper[5120]: I0122 11:49:01.787742 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787760 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787774 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787792 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787806 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787824 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787841 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787889 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787906 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787921 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: 
\"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787937 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.787976 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788000 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788017 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788054 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788070 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788085 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788100 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788115 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788131 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod 
\"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788150 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788170 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788186 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788203 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788218 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788235 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788252 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788267 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788282 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788297 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: 
\"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788317 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788333 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788349 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788366 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788384 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788404 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788422 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788441 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788459 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788483 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788500 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788515 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788531 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788548 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788565 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788580 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788597 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788615 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788632 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788650 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788666 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788683 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788700 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788718 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788736 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788753 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788770 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788787 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788807 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 
11:49:01.788824 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788841 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788857 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788877 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788893 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788909 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788926 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788942 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788975 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788994 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789012 5120 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789031 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789050 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789068 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789088 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789724 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789832 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789892 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789944 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790021 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790082 5120 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790355 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790400 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790436 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790480 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790522 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790555 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790646 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793702 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793761 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 11:49:01 crc 
kubenswrapper[5120]: I0122 11:49:01.793807 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793848 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793882 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793924 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793981 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794024 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794078 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794603 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794677 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794716 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 
11:49:01.794753 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794789 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794974 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795013 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795083 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795253 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788399 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795372 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788603 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.788666 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789067 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789301 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789700 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.789978 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790003 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790117 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790102 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790127 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790710 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.790755 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791112 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791126 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791436 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791625 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791814 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791820 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791878 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.791839 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.792183 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.792276 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.792297 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.792338 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.792625 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.792654 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.792675 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793255 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793441 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793461 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.793767 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794085 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794099 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794168 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794219 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794464 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794479 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794760 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795629 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.794762 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795079 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795152 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795760 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.795475 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.796075 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.796575 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.796846 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797090 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797156 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797145 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797349 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797453 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797605 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797775 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797824 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.797850 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.796900 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.798152 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.798492 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.798823 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.799111 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.799456 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.799523 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.799899 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.800311 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.800422 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.800584 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.800482 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.800856 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.800911 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801025 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801181 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801221 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801260 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801331 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801495 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801717 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.801929 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.802314 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.802526 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.802714 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.802880 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.802904 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803092 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803156 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803373 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803418 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803550 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803589 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803671 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803600 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). 
InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803656 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.803719 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804038 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804130 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804161 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804208 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804233 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804482 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804819 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.804977 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.805095 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.805599 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.805608 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.805935 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.806100 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.806147 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.806657 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.806799 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.806903 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.806951 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807024 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807053 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807082 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807111 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807144 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807173 5120 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807201 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807227 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807256 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807284 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807310 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807341 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807372 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807403 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807429 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc 
kubenswrapper[5120]: I0122 11:49:01.807455 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807480 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807583 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807607 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807629 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807650 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807672 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807693 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807717 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807741 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 22 11:49:01 crc 
kubenswrapper[5120]: I0122 11:49:01.807763 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807784 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807808 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807832 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807863 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807890 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807914 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807937 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807992 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808020 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 11:49:01 crc 
kubenswrapper[5120]: I0122 11:49:01.808045 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808076 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808100 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808124 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808155 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808188 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808217 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808241 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808265 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808291 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 
11:49:01.808320 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808343 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808369 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808394 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808418 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808443 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808469 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808498 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808523 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808550 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808573 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808605 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808632 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808657 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808685 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808710 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808744 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808772 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808797 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808824 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808855 5120 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808880 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808905 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808934 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808978 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809007 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.806908 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807233 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809102 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.807851 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808085 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808221 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808421 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808648 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.808812 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809246 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809140 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809378 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809425 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809574 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809604 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809746 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809785 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809919 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809947 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.810134 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.810478 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.810505 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.810533 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.810828 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811115 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811176 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811239 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811290 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811348 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811420 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811374 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811535 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.809052 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811573 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811774 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811808 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811823 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811847 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811869 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.811869 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812006 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812053 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-hosts-file\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812107 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-etc-kubernetes\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812147 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812184 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-env-overrides\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812237 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-cni-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812278 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-netns\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812351 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-system-cni-dir\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812397 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-cni-binary-copy\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812432 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812470 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdqkj\" (UniqueName: \"kubernetes.io/projected/f9f485fd-0793-40a0-abf8-12fd3b612c87-kube-api-access-wdqkj\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812502 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813362 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-tmp-dir\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813591 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dgcrk\" (UniqueName: \"kubernetes.io/projected/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-kube-api-access-dgcrk\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813655 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-socket-dir-parent\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813680 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-cni-multus\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813706 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-system-cni-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813763 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-kubelet\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813783 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-ovn\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813800 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-bin\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813822 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scbgq\" (UniqueName: \"kubernetes.io/projected/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-kube-api-access-scbgq\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813892 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813980 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kndcw\" (UniqueName: \"kubernetes.io/projected/dababdca-8afb-452f-865f-54de3aec21d9-kube-api-access-kndcw\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814003 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-kubelet\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814044 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-netd\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814067 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814090 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovn-node-metrics-cert\") pod \"ovnkube-node-2mf7v\" 
(UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815674 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815767 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-cnibin\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815814 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-hostroot\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812239 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812440 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812664 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812726 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.812859 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.813180 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814485 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814693 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814851 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814735 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.814996 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815042 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815434 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815558 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815576 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816622 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz7fj\" (UniqueName: \"kubernetes.io/projected/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-kube-api-access-zz7fj\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816768 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-tmp-dir\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816860 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815739 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.815884 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816229 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816242 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816427 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816710 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816784 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.816789 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.817157 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.818015 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.818246 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.818506 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.818707 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.818892 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819017 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819424 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819499 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-var-lib-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819654 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f9f485fd-0793-40a0-abf8-12fd3b612c87-serviceca\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819699 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-netns\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819734 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-slash\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819764 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-config\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819801 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9f485fd-0793-40a0-abf8-12fd3b612c87-host\") pod 
\"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819844 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819906 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.820033 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-systemd-units\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.819603 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.820142 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.820305 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.820341 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.820361 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.820886 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.821024 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.821638 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.821919 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-cni-bin\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.821754 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.821762 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.822198 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:02.322140653 +0000 UTC m=+77.066089024 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822418 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-log-socket\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822467 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-script-lib\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822501 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-rootfs\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822548 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-k8s-cni-cncf-io\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822582 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-conf-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822614 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-cnibin\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822608 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822649 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-os-release\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822684 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822715 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-etc-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822753 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-multus-certs\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822791 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-node-log\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822821 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdzrm\" (UniqueName: \"kubernetes.io/projected/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-kube-api-access-zdzrm\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822857 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-mcd-auth-proxy-config\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822886 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lt4m\" (UniqueName: \"kubernetes.io/projected/cdb50da0-eb06-4959-b8da-70919924f77e-kube-api-access-9lt4m\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822923 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-os-release\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822976 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs4xp\" (UniqueName: \"kubernetes.io/projected/97df0621-ddba-4462-8134-59bc671c7351-kube-api-access-cs4xp\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823012 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-systemd\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823046 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-ovn-kubernetes\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823075 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-proxy-tls\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823105 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdb50da0-eb06-4959-b8da-70919924f77e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823159 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-cni-binary-copy\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823192 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-daemon-config\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823359 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823396 5120 
reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823416 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823436 5120 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823457 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823476 5120 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823495 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823515 5120 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.820233 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-hosts-file\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823555 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822651 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.822882 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823115 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823216 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823325 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823610 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823705 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823622 5120 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823812 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823887 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823927 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823949 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.823995 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824008 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824047 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824110 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824133 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824089 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824220 5120 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824238 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" 
DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824256 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824271 5120 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824286 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824282 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824302 5120 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824321 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824338 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824352 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824366 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824379 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824394 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824398 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824410 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824438 5120 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824450 5120 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824451 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824463 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824476 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824490 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824506 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824518 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824531 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824488 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824543 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824556 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824568 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824579 5120 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824591 5120 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824603 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824605 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824617 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824681 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824761 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824782 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824800 5120 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824818 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824836 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.824854 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825025 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825124 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825196 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825221 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825238 5120 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825251 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825359 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825383 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825403 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825425 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825443 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825462 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825481 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825500 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825518 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath 
\"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825537 5120 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825556 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825574 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825574 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825592 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825615 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825633 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825651 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825669 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825688 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825708 5120 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825727 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 
11:49:01.825752 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825759 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825775 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825798 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825816 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825834 5120 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825851 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825869 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825885 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825902 5120 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825920 5120 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825942 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.825995 5120 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826015 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826032 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826050 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826068 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826088 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826105 5120 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826122 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826140 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826157 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826213 5120 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826233 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826252 5120 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 
crc kubenswrapper[5120]: I0122 11:49:01.826269 5120 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826286 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826307 5120 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826325 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826343 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826359 5120 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826375 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826392 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826410 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826429 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826453 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826470 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826488 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826505 
5120 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826521 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826538 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826555 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826571 5120 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826587 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826605 5120 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826623 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826641 5120 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826663 5120 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826680 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826698 5120 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826719 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826736 5120 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826753 5120 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826770 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826787 5120 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826805 5120 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826822 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826839 5120 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826858 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826875 5120 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826892 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826909 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826925 5120 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826943 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.826982 5120 
reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827000 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827018 5120 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827035 5120 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827052 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827070 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827088 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827106 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827124 5120 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827141 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827159 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827178 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827195 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827212 5120 reconciler_common.go:299] 
"Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827229 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827246 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827265 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827282 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827301 5120 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827320 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827337 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827354 5120 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827371 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827387 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827402 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827418 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827435 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827452 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827469 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827486 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827504 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827521 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827539 5120 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.827557 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.828010 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.828027 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.828469 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.830894 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.834388 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.834445 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.834546 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.835001 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.835918 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.836157 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.836198 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.836322 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.836377 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.836536 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.837106 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.837183 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.837296 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.837556 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.837903 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.840232 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.840232 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.840583 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.840685 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.841123 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.841142 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.841739 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.846641 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.846852 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.847050 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgcrk\" (UniqueName: \"kubernetes.io/projected/eaa5719f-fed8-44ac-a759-d2c22d9a2a7f-kube-api-access-dgcrk\") pod \"node-resolver-wrdkl\" (UID: \"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\") " pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.847193 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.847346 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\
"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.852437 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.852495 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.852515 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.852548 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.852576 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:01Z","lastTransitionTime":"2026-01-22T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.859842 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.861046 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.862083 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.865714 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{
\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.867717 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.879114 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.882005 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.886038 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.887650 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.890266 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu
\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.890744 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 22 11:49:01 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: source /etc/kubernetes/apiserver-url.env Jan 22 11:49:01 crc kubenswrapper[5120]: else Jan 22 11:49:01 crc kubenswrapper[5120]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 22 11:49:01 crc kubenswrapper[5120]: exit 1 Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 22 11:49:01 crc kubenswrapper[5120]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:01 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.891894 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 22 11:49:01 crc kubenswrapper[5120]: W0122 11:49:01.898171 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-89fd87fcbdb16db0a35262776e2e8cda8e268b9cf22471a8b0af91d17737aa56 WatchSource:0}: Error finding container 89fd87fcbdb16db0a35262776e2e8cda8e268b9cf22471a8b0af91d17737aa56: Status 404 returned error can't find the container with id 89fd87fcbdb16db0a35262776e2e8cda8e268b9cf22471a8b0af91d17737aa56 Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.898578 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: source "/env/_master" Jan 22 11:49:01 crc kubenswrapper[5120]: set +o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 22 11:49:01 crc kubenswrapper[5120]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 22 11:49:01 crc kubenswrapper[5120]: ho_enable="--enable-hybrid-overlay" Jan 22 11:49:01 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 22 11:49:01 crc kubenswrapper[5120]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 22 11:49:01 crc kubenswrapper[5120]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 22 11:49:01 crc kubenswrapper[5120]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 22 11:49:01 crc kubenswrapper[5120]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --webhook-host=127.0.0.1 \ Jan 22 11:49:01 crc kubenswrapper[5120]: --webhook-port=9743 \ Jan 22 11:49:01 crc kubenswrapper[5120]: ${ho_enable} \ Jan 22 11:49:01 crc kubenswrapper[5120]: --enable-interconnect \ Jan 22 11:49:01 crc kubenswrapper[5120]: --disable-approver \ Jan 22 11:49:01 crc kubenswrapper[5120]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --wait-for-kubernetes-api=200s \ Jan 22 11:49:01 crc kubenswrapper[5120]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --loglevel="${LOGLEVEL}" Jan 22 11:49:01 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 22 11:49:01 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.901459 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.902183 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: source "/env/_master" Jan 22 11:49:01 crc kubenswrapper[5120]: set +o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 11:49:01 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 22 11:49:01 crc kubenswrapper[5120]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 22 11:49:01 crc kubenswrapper[5120]: --disable-webhook \ Jan 22 11:49:01 crc kubenswrapper[5120]: 
--csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --loglevel="${LOGLEVEL}" Jan 22 11:49:01 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:01 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.903365 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.903759 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.905061 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.908253 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"89fd87fcbdb16db0a35262776e2e8cda8e268b9cf22471a8b0af91d17737aa56"} Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.909521 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"2e04559dec16ab7018539cbee7830f09441da9e974cd81d09aceb8b51db915ee"} Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.910230 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.910838 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"0dd60b61ffd0d4d6a32efceb6f2e8ab66bb020d554438a82321a2b3ac810a3e0"} Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.911225 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: source "/env/_master" Jan 22 11:49:01 crc kubenswrapper[5120]: set +o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 22 11:49:01 crc kubenswrapper[5120]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 22 11:49:01 crc kubenswrapper[5120]: ho_enable="--enable-hybrid-overlay" Jan 22 11:49:01 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 22 11:49:01 crc kubenswrapper[5120]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 22 11:49:01 crc kubenswrapper[5120]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 22 11:49:01 crc kubenswrapper[5120]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 22 11:49:01 crc kubenswrapper[5120]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --webhook-host=127.0.0.1 \ Jan 22 11:49:01 crc kubenswrapper[5120]: --webhook-port=9743 \ Jan 22 11:49:01 crc kubenswrapper[5120]: ${ho_enable} \ Jan 22 11:49:01 crc kubenswrapper[5120]: --enable-interconnect \ Jan 22 11:49:01 crc kubenswrapper[5120]: --disable-approver \ Jan 22 11:49:01 crc kubenswrapper[5120]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --wait-for-kubernetes-api=200s \ Jan 22 11:49:01 crc kubenswrapper[5120]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --loglevel="${LOGLEVEL}" Jan 22 11:49:01 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 22 11:49:01 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.911660 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.911797 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.912200 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.912457 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 22 11:49:01 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: source /etc/kubernetes/apiserver-url.env Jan 22 11:49:01 crc 
kubenswrapper[5120]: else Jan 22 11:49:01 crc kubenswrapper[5120]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 22 11:49:01 crc kubenswrapper[5120]: exit 1 Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 22 11:49:01 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_
CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:01 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.912870 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.913518 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.914632 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: source "/env/_master" Jan 22 11:49:01 crc 
kubenswrapper[5120]: set +o allexport Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 11:49:01 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 22 11:49:01 crc kubenswrapper[5120]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 22 11:49:01 crc kubenswrapper[5120]: --disable-webhook \ Jan 22 11:49:01 crc kubenswrapper[5120]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 22 11:49:01 crc kubenswrapper[5120]: --loglevel="${LOGLEVEL}" Jan 22 11:49:01 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:01 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.916280 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.920521 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.923118 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929068 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-cni-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929127 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-netns\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929222 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-netns\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929260 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-cni-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929331 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-system-cni-dir\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929383 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-cni-binary-copy\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929391 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-system-cni-dir\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929405 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929428 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wdqkj\" (UniqueName: \"kubernetes.io/projected/f9f485fd-0793-40a0-abf8-12fd3b612c87-kube-api-access-wdqkj\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929452 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929516 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-socket-dir-parent\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929561 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-cni-multus\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929587 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-system-cni-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929607 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-kubelet\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929625 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-ovn\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929673 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-cni-multus\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929686 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-ovn\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929678 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929714 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-bin\") pod \"ovnkube-node-2mf7v\" (UID: 
\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929750 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-bin\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929766 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-system-cni-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929785 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-kubelet\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929787 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-scbgq\" (UniqueName: \"kubernetes.io/projected/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-kube-api-access-scbgq\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929885 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kndcw\" (UniqueName: \"kubernetes.io/projected/dababdca-8afb-452f-865f-54de3aec21d9-kube-api-access-kndcw\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929923 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-kubelet\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929947 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-netd\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.929999 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930027 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovn-node-metrics-cert\") pod \"ovnkube-node-2mf7v\" (UID: 
\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930055 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930078 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-cnibin\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930110 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-hostroot\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930116 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930175 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zz7fj\" (UniqueName: \"kubernetes.io/projected/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-kube-api-access-zz7fj\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930203 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-kubelet\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930208 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930243 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-cnibin\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.930265 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930279 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-var-lib-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930311 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f9f485fd-0793-40a0-abf8-12fd3b612c87-serviceca\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.930330 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs podName:dababdca-8afb-452f-865f-54de3aec21d9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:02.430311644 +0000 UTC m=+77.174259985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs") pod "network-metrics-daemon-ldwx4" (UID: "dababdca-8afb-452f-865f-54de3aec21d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930432 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930432 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-hostroot\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930499 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930527 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-var-lib-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930548 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-netns\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930570 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-slash\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: 
I0122 11:49:01.930586 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-config\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930601 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9f485fd-0793-40a0-abf8-12fd3b612c87-host\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930618 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930660 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-systemd-units\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930665 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-slash\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930679 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-netns\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930728 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-systemd-units\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930748 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-cni-bin\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930770 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-log-socket\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930789 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-script-lib\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930807 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-rootfs\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930831 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-k8s-cni-cncf-io\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930847 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-conf-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930857 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-var-lib-cni-bin\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930897 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-k8s-cni-cncf-io\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930916 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-cnibin\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930931 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-rootfs\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930939 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-conf-dir\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930864 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-cnibin\") pod \"multus-additional-cni-plugins-rg989\" (UID: 
\"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930993 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-os-release\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931012 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-log-socket\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931014 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931058 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-etc-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931080 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-multus-certs\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931096 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-node-log\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931112 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zdzrm\" (UniqueName: \"kubernetes.io/projected/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-kube-api-access-zdzrm\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931128 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-mcd-auth-proxy-config\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931144 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9lt4m\" (UniqueName: \"kubernetes.io/projected/cdb50da0-eb06-4959-b8da-70919924f77e-kube-api-access-9lt4m\") pod 
\"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931166 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-os-release\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931180 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-node-log\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931363 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931556 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-script-lib\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931652 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/97df0621-ddba-4462-8134-59bc671c7351-os-release\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931687 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-cni-binary-copy\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931185 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cs4xp\" (UniqueName: \"kubernetes.io/projected/97df0621-ddba-4462-8134-59bc671c7351-kube-api-access-cs4xp\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931693 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-etc-openvswitch\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931732 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-systemd\") pod 
\"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931727 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-host-run-multus-certs\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931698 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-os-release\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931770 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-systemd\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931810 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-socket-dir-parent\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931856 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-ovn-kubernetes\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931773 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-netd\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931825 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-ovn-kubernetes\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931949 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.932030 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-proxy-tls\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " 
pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.931979 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-mcd-auth-proxy-config\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.932101 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdb50da0-eb06-4959-b8da-70919924f77e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.932167 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-config\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.930785 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9f485fd-0793-40a0-abf8-12fd3b612c87-host\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.933239 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.933755 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovn-node-metrics-cert\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934042 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-cni-binary-copy\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934083 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-daemon-config\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934121 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-etc-kubernetes\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934165 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934178 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f9f485fd-0793-40a0-abf8-12fd3b612c87-serviceca\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934190 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-env-overrides\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934268 
5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-etc-kubernetes\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934338 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934359 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934380 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934394 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934407 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934421 5120 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934436 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934452 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934465 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934477 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934489 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934502 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934514 5120 
reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934529 5120 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934542 5120 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934558 5120 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934571 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934584 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934595 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934605 5120 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934615 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934627 5120 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934641 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934655 5120 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934667 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934680 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934693 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934706 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934719 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934730 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934738 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934748 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934759 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934768 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934776 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934786 5120 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934796 5120 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934805 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934815 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: 
\"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934824 5120 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934832 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934841 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934850 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934860 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934868 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934878 5120 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934889 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.934898 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.935285 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/97df0621-ddba-4462-8134-59bc671c7351-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.935349 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-cni-binary-copy\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.935488 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-multus-daemon-config\") pod \"multus-4lzht\" 
(UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.935906 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdb50da0-eb06-4959-b8da-70919924f77e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.936242 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-env-overrides\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.937943 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-proxy-tls\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.944021 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wrdkl" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.948837 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdqkj\" (UniqueName: \"kubernetes.io/projected/f9f485fd-0793-40a0-abf8-12fd3b612c87-kube-api-access-wdqkj\") pod \"node-ca-tf9nb\" (UID: \"f9f485fd-0793-40a0-abf8-12fd3b612c87\") " pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.949175 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-scbgq\" (UniqueName: \"kubernetes.io/projected/90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9-kube-api-access-scbgq\") pod \"machine-config-daemon-dq269\" (UID: \"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\") " pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.950483 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs4xp\" (UniqueName: \"kubernetes.io/projected/97df0621-ddba-4462-8134-59bc671c7351-kube-api-access-cs4xp\") pod \"multus-additional-cni-plugins-rg989\" (UID: \"97df0621-ddba-4462-8134-59bc671c7351\") " pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.953340 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.953428 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdzrm\" (UniqueName: \"kubernetes.io/projected/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-kube-api-access-zdzrm\") pod \"ovnkube-node-2mf7v\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.953942 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz7fj\" (UniqueName: \"kubernetes.io/projected/67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087-kube-api-access-zz7fj\") pod \"multus-4lzht\" (UID: \"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\") " pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.954034 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lt4m\" (UniqueName: \"kubernetes.io/projected/cdb50da0-eb06-4959-b8da-70919924f77e-kube-api-access-9lt4m\") pod \"ovnkube-control-plane-57b78d8988-xzh79\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.954184 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rg989" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.957358 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kndcw\" (UniqueName: \"kubernetes.io/projected/dababdca-8afb-452f-865f-54de3aec21d9-kube-api-access-kndcw\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:01 crc kubenswrapper[5120]: W0122 11:49:01.957605 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaa5719f_fed8_44ac_a759_d2c22d9a2a7f.slice/crio-cead94ca34f70bd435c09fd64bff64731b52e59517244bfd77f36dc376930de5 WatchSource:0}: Error finding container cead94ca34f70bd435c09fd64bff64731b52e59517244bfd77f36dc376930de5: Status 404 returned error can't find the container with id cead94ca34f70bd435c09fd64bff64731b52e59517244bfd77f36dc376930de5 Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.957716 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.957774 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.957786 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.957803 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.957814 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:01Z","lastTransitionTime":"2026-01-22T11:49:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.964304 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-tf9nb" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.964659 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.965805 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 22 11:49:01 crc kubenswrapper[5120]: set -uo pipefail Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 11:49:01 crc kubenswrapper[5120]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 11:49:01 crc kubenswrapper[5120]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 22 11:49:01 crc kubenswrapper[5120]: HOSTS_FILE="/etc/hosts" Jan 22 11:49:01 crc kubenswrapper[5120]: TEMP_FILE="/tmp/hosts.tmp" Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 
11:49:01 crc kubenswrapper[5120]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 11:49:01 crc kubenswrapper[5120]: # Make a temporary file with the old hosts file's attributes. Jan 22 11:49:01 crc kubenswrapper[5120]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 22 11:49:01 crc kubenswrapper[5120]: echo "Failed to preserve hosts file. Exiting." Jan 22 11:49:01 crc kubenswrapper[5120]: exit 1 Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 11:49:01 crc kubenswrapper[5120]: while true; do Jan 22 11:49:01 crc kubenswrapper[5120]: declare -A svc_ips Jan 22 11:49:01 crc kubenswrapper[5120]: for svc in "${services[@]}"; do Jan 22 11:49:01 crc kubenswrapper[5120]: # Fetch service IP from cluster dns if present. We make several tries Jan 22 11:49:01 crc kubenswrapper[5120]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 22 11:49:01 crc kubenswrapper[5120]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 22 11:49:01 crc kubenswrapper[5120]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 22 11:49:01 crc kubenswrapper[5120]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 22 11:49:01 crc kubenswrapper[5120]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 22 11:49:01 crc kubenswrapper[5120]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 22 11:49:01 crc kubenswrapper[5120]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 22 11:49:01 crc kubenswrapper[5120]: for i in ${!cmds[*]} Jan 22 11:49:01 crc kubenswrapper[5120]: do Jan 22 11:49:01 crc kubenswrapper[5120]: ips=($(eval "${cmds[i]}")) Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: svc_ips["${svc}"]="${ips[@]}" Jan 22 11:49:01 crc kubenswrapper[5120]: break Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: done Jan 22 11:49:01 crc kubenswrapper[5120]: done Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 11:49:01 crc kubenswrapper[5120]: # Update /etc/hosts only if we get valid service IPs Jan 22 11:49:01 crc kubenswrapper[5120]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 22 11:49:01 crc kubenswrapper[5120]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 22 11:49:01 crc kubenswrapper[5120]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 22 11:49:01 crc kubenswrapper[5120]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 22 11:49:01 crc kubenswrapper[5120]: sleep 60 & wait Jan 22 11:49:01 crc kubenswrapper[5120]: continue Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 11:49:01 crc kubenswrapper[5120]: # Append resolver entries for services Jan 22 11:49:01 crc kubenswrapper[5120]: rc=0 Jan 22 11:49:01 crc kubenswrapper[5120]: for svc in "${!svc_ips[@]}"; do Jan 22 11:49:01 crc kubenswrapper[5120]: for ip in ${svc_ips[${svc}]}; do Jan 22 11:49:01 crc kubenswrapper[5120]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 22 11:49:01 crc kubenswrapper[5120]: done Jan 22 11:49:01 crc kubenswrapper[5120]: done Jan 22 11:49:01 crc kubenswrapper[5120]: if [[ $rc -ne 0 ]]; then Jan 22 11:49:01 crc kubenswrapper[5120]: sleep 60 & wait Jan 22 11:49:01 crc kubenswrapper[5120]: continue Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 11:49:01 crc kubenswrapper[5120]: Jan 22 11:49:01 crc kubenswrapper[5120]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 22 11:49:01 crc kubenswrapper[5120]: # Replace /etc/hosts with our modified version if needed Jan 22 11:49:01 crc kubenswrapper[5120]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 22 11:49:01 crc kubenswrapper[5120]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: sleep 60 & wait Jan 22 11:49:01 crc kubenswrapper[5120]: unset svc_ips Jan 22 11:49:01 crc kubenswrapper[5120]: done Jan 22 11:49:01 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgcrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-wrdkl_openshift-dns(eaa5719f-fed8-44ac-a759-d2c22d9a2a7f): CreateContainerConfigError: services have not yet been read at least 
once, cannot construct envvars Jan 22 11:49:01 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.966945 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-wrdkl" podUID="eaa5719f-fed8-44ac-a759-d2c22d9a2a7f" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.971594 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.973822 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs4xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-rg989_openshift-multus(97df0621-ddba-4462-8134-59bc671c7351): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.975583 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-rg989" podUID="97df0621-ddba-4462-8134-59bc671c7351" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.978023 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.978592 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.987332 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4lzht" Jan 22 11:49:01 crc kubenswrapper[5120]: W0122 11:49:01.988162 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9f485fd_0793_40a0_abf8_12fd3b612c87.slice/crio-a6e0c823a1210b5b9380e5060667c155023baf8bda5d5ab1e94bc885f2b1e0bb WatchSource:0}: Error finding container a6e0c823a1210b5b9380e5060667c155023baf8bda5d5ab1e94bc885f2b1e0bb: Status 404 returned error can't find the container with id a6e0c823a1210b5b9380e5060667c155023baf8bda5d5ab1e94bc885f2b1e0bb Jan 22 11:49:01 crc kubenswrapper[5120]: I0122 11:49:01.992749 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.993596 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:01 crc kubenswrapper[5120]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 22 11:49:01 crc kubenswrapper[5120]: while [ true ]; Jan 22 11:49:01 crc kubenswrapper[5120]: do Jan 22 11:49:01 crc kubenswrapper[5120]: for f in $(ls /tmp/serviceca); do Jan 22 11:49:01 crc kubenswrapper[5120]: echo $f Jan 22 11:49:01 crc kubenswrapper[5120]: ca_file_path="/tmp/serviceca/${f}" Jan 22 11:49:01 crc kubenswrapper[5120]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 22 11:49:01 crc kubenswrapper[5120]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 22 11:49:01 crc 
kubenswrapper[5120]: if [ -e "${reg_dir_path}" ]; then Jan 22 11:49:01 crc kubenswrapper[5120]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 22 11:49:01 crc kubenswrapper[5120]: else Jan 22 11:49:01 crc kubenswrapper[5120]: mkdir $reg_dir_path Jan 22 11:49:01 crc kubenswrapper[5120]: cp $ca_file_path $reg_dir_path/ca.crt Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: done Jan 22 11:49:01 crc kubenswrapper[5120]: for d in $(ls /etc/docker/certs.d); do Jan 22 11:49:01 crc kubenswrapper[5120]: echo $d Jan 22 11:49:01 crc kubenswrapper[5120]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 22 11:49:01 crc kubenswrapper[5120]: reg_conf_path="/tmp/serviceca/${dp}" Jan 22 11:49:01 crc kubenswrapper[5120]: if [ ! -e "${reg_conf_path}" ]; then Jan 22 11:49:01 crc kubenswrapper[5120]: rm -rf /etc/docker/certs.d/$d Jan 22 11:49:01 crc kubenswrapper[5120]: fi Jan 22 11:49:01 crc kubenswrapper[5120]: done Jan 22 11:49:01 crc kubenswrapper[5120]: sleep 60 & wait ${!} Jan 22 11:49:01 crc kubenswrapper[5120]: done Jan 22 11:49:01 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdqkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tf9nb_openshift-image-registry(f9f485fd-0793-40a0-abf8-12fd3b612c87): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:01 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:01 crc kubenswrapper[5120]: E0122 11:49:01.995384 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tf9nb" podUID="f9f485fd-0793-40a0-abf8-12fd3b612c87" Jan 22 11:49:02 crc kubenswrapper[5120]: W0122 11:49:02.001290 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90c9e0b1_9c25_48fc_8aef_c587b5d6d8e9.slice/crio-89844fac781a686f2175b05b0f7c607c93448977e06c70e055b15e62df93a488 WatchSource:0}: Error finding container 89844fac781a686f2175b05b0f7c607c93448977e06c70e055b15e62df93a488: Status 404 returned error can't find the container with id 
89844fac781a686f2175b05b0f7c607c93448977e06c70e055b15e62df93a488 Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.003785 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.004110 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scbgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: W0122 11:49:02.005008 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd62bdde_a6c1_42b3_9585_ba64c63cbb51.slice/crio-948f3922f0403f01af9c080b4700105b9cfcfffd97d2155e3cc2c89092d9038d WatchSource:0}: Error finding container 948f3922f0403f01af9c080b4700105b9cfcfffd97d2155e3cc2c89092d9038d: Status 404 returned error can't find the container with id 948f3922f0403f01af9c080b4700105b9cfcfffd97d2155e3cc2c89092d9038d Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.008597 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.008807 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:02 crc kubenswrapper[5120]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 22 11:49:02 crc kubenswrapper[5120]: apiVersion: v1 Jan 22 11:49:02 crc kubenswrapper[5120]: clusters: Jan 22 11:49:02 crc 
kubenswrapper[5120]: - cluster: Jan 22 11:49:02 crc kubenswrapper[5120]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 22 11:49:02 crc kubenswrapper[5120]: server: https://api-int.crc.testing:6443 Jan 22 11:49:02 crc kubenswrapper[5120]: name: default-cluster Jan 22 11:49:02 crc kubenswrapper[5120]: contexts: Jan 22 11:49:02 crc kubenswrapper[5120]: - context: Jan 22 11:49:02 crc kubenswrapper[5120]: cluster: default-cluster Jan 22 11:49:02 crc kubenswrapper[5120]: namespace: default Jan 22 11:49:02 crc kubenswrapper[5120]: user: default-auth Jan 22 11:49:02 crc kubenswrapper[5120]: name: default-context Jan 22 11:49:02 crc kubenswrapper[5120]: current-context: default-context Jan 22 11:49:02 crc kubenswrapper[5120]: kind: Config Jan 22 11:49:02 crc kubenswrapper[5120]: preferences: {} Jan 22 11:49:02 crc kubenswrapper[5120]: users: Jan 22 11:49:02 crc kubenswrapper[5120]: - name: default-auth Jan 22 11:49:02 crc kubenswrapper[5120]: user: Jan 22 11:49:02 crc kubenswrapper[5120]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 22 11:49:02 crc kubenswrapper[5120]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 22 11:49:02 crc kubenswrapper[5120]: EOF Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdzrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-2mf7v_openshift-ovn-kubernetes(dd62bdde-a6c1-42b3-9585-ba64c63cbb51): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.009169 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scbgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.010330 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.010332 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 11:49:02 crc kubenswrapper[5120]: W0122 11:49:02.016147 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67eb0b85_4fb2_4c18_a78b_e2eeaa4d2087.slice/crio-082b622a176aa05319a4fc66bfbdcdbb3ba81ad686d896f0acc0ae2f995c8919 WatchSource:0}: Error finding container 082b622a176aa05319a4fc66bfbdcdbb3ba81ad686d896f0acc0ae2f995c8919: Status 404 returned error can't find the container with id 082b622a176aa05319a4fc66bfbdcdbb3ba81ad686d896f0acc0ae2f995c8919 Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.018982 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:02 crc kubenswrapper[5120]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 22 11:49:02 crc kubenswrapper[5120]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 22 11:49:02 crc kubenswrapper[5120]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zz7fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-4lzht_openshift-multus(67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.020840 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-4lzht" podUID="67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.021080 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: W0122 11:49:02.024005 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdb50da0_eb06_4959_b8da_70919924f77e.slice/crio-20963fbe51218d226586341531cebabcba165784d34f9b709674547be7d8df72 WatchSource:0}: Error finding container 20963fbe51218d226586341531cebabcba165784d34f9b709674547be7d8df72: Status 404 returned error can't find the container with id 20963fbe51218d226586341531cebabcba165784d34f9b709674547be7d8df72 Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.026479 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:02 crc kubenswrapper[5120]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 22 11:49:02 crc kubenswrapper[5120]: set -euo pipefail Jan 22 11:49:02 crc kubenswrapper[5120]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 22 11:49:02 crc kubenswrapper[5120]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 22 11:49:02 crc kubenswrapper[5120]: # As the secret mount is optional we must wait for the files to be present. Jan 22 11:49:02 crc kubenswrapper[5120]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Jan 22 11:49:02 crc kubenswrapper[5120]: TS=$(date +%s) Jan 22 11:49:02 crc kubenswrapper[5120]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 22 11:49:02 crc kubenswrapper[5120]: HAS_LOGGED_INFO=0 Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: log_missing_certs(){ Jan 22 11:49:02 crc kubenswrapper[5120]: CUR_TS=$(date +%s) Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 22 11:49:02 crc kubenswrapper[5120]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 22 11:49:02 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 22 11:49:02 crc kubenswrapper[5120]: HAS_LOGGED_INFO=1 Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: } Jan 22 11:49:02 crc kubenswrapper[5120]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Jan 22 11:49:02 crc kubenswrapper[5120]: log_missing_certs Jan 22 11:49:02 crc kubenswrapper[5120]: sleep 5 Jan 22 11:49:02 crc kubenswrapper[5120]: done Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 22 11:49:02 crc kubenswrapper[5120]: exec /usr/bin/kube-rbac-proxy \ Jan 22 11:49:02 crc kubenswrapper[5120]: --logtostderr \ Jan 22 11:49:02 crc kubenswrapper[5120]: --secure-listen-address=:9108 \ Jan 22 11:49:02 crc kubenswrapper[5120]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 22 11:49:02 crc kubenswrapper[5120]: --upstream=http://127.0.0.1:29108/ \ Jan 22 11:49:02 crc kubenswrapper[5120]: --tls-private-key-file=${TLS_PK} \ Jan 22 11:49:02 crc kubenswrapper[5120]: --tls-cert-file=${TLS_CERT} Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lt4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xzh79_openshift-ovn-kubernetes(cdb50da0-eb06-4959-b8da-70919924f77e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.029157 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:02 crc kubenswrapper[5120]: container 
&Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:02 crc kubenswrapper[5120]: source "/env/_master" Jan 22 11:49:02 crc kubenswrapper[5120]: set +o allexport Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: # This is needed so that converting clusters from GA to TP Jan 22 11:49:02 crc kubenswrapper[5120]: # will rollout control plane pods as well Jan 22 11:49:02 crc kubenswrapper[5120]: network_segmentation_enabled_flag= Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_enabled_flag= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" != "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: route_advertisements_enable_flag= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 
crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: # Enable multi-network policy if configured (control-plane always full mode) Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_policy_enabled_flag= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: # Enable admin network policy if configured (control-plane always full mode) Jan 22 11:49:02 crc kubenswrapper[5120]: admin_network_policy_enabled_flag= Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Jan 22 11:49:02 crc kubenswrapper[5120]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: if [ "shared" == "shared" ]; then Jan 22 11:49:02 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode shared" Jan 22 11:49:02 crc kubenswrapper[5120]: elif [ "shared" == "local" ]; then Jan 22 11:49:02 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode local" Jan 22 11:49:02 crc kubenswrapper[5120]: else Jan 22 11:49:02 crc kubenswrapper[5120]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 22 11:49:02 crc kubenswrapper[5120]: exit 1 Jan 22 11:49:02 crc kubenswrapper[5120]: fi Jan 22 11:49:02 crc kubenswrapper[5120]: Jan 22 11:49:02 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 22 11:49:02 crc kubenswrapper[5120]: exec /usr/bin/ovnkube \ Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-interconnect \ Jan 22 11:49:02 crc kubenswrapper[5120]: --init-cluster-manager "${K8S_NODE}" \ Jan 22 11:49:02 crc kubenswrapper[5120]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 22 11:49:02 crc kubenswrapper[5120]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 22 11:49:02 crc kubenswrapper[5120]: --metrics-bind-address "127.0.0.1:29108" \ Jan 22 11:49:02 crc kubenswrapper[5120]: --metrics-enable-pprof \ Jan 22 11:49:02 crc kubenswrapper[5120]: --metrics-enable-config-duration \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v4_join_subnet_opt} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v6_join_subnet_opt} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${dns_name_resolver_enabled_flag} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${persistent_ips_enabled_flag} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${multi_network_enabled_flag} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${network_segmentation_enabled_flag} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${gateway_mode_flags} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${route_advertisements_enable_flag} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${preconfigured_udn_addresses_enable_flag} \ Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-ip=true \ Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-firewall=true \ Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-qos=true \ Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-service=true \ Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-multicast \ Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-multi-external-gateway=true \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${multi_network_policy_enabled_flag} \ Jan 22 11:49:02 crc kubenswrapper[5120]: ${admin_network_policy_enabled_flag} Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lt4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xzh79_openshift-ovn-kubernetes(cdb50da0-eb06-4959-b8da-70919924f77e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.030338 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.032259 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.040470 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.058213 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.059386 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.059440 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.059452 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.059466 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.059476 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.098038 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.153320 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b
21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.161584 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.161681 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.161709 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.161744 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.161777 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.184039 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.238572 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.238659 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.238732 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.238764 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239048 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239077 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239095 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod 
openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239187 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:03.239159542 +0000 UTC m=+77.983107893 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239293 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239310 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239322 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239360 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:03.239347817 +0000 UTC m=+77.983296178 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239426 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239466 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:03.239455549 +0000 UTC m=+77.983403900 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239534 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.239569 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:03.239559592 +0000 UTC m=+77.983507953 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.243950 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39c0a299-bb61-4f5d-8177-544cd4abe1ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"mem
ory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"us
er\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.265795 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.265902 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.265933 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.266063 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.266097 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.271381 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.306696 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.339291 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.339578 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.339756 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:03.339729097 +0000 UTC m=+78.083677448 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.368332 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.368384 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.368395 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.368413 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.368424 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.379926 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.421764 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.441563 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.441764 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.441842 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs podName:dababdca-8afb-452f-865f-54de3aec21d9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:03.441821339 +0000 UTC m=+78.185769690 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs") pod "network-metrics-daemon-ldwx4" (UID: "dababdca-8afb-452f-865f-54de3aec21d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.464541 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.471400 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.471466 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.471480 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.471504 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.471517 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.503034 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized 
nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
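The fatal "pods \"kube-apiserver-crc\" not found" exit above shows check-endpoints crash-looping because it cannot yet look up its own pod. A diagnostic sketch for pulling the failed attempts' logs directly on the node (assumes node shell access and crictl; the cri-o:// container IDs in these payloads indicate a CRI-O runtime):

# List every attempt of the crash-looping container, including exited ones.
sudo crictl ps -a --name kube-apiserver-check-endpoints

# Dump the log of one attempt; <container-id> is a placeholder for an ID
# taken from the listing above.
sudo crictl logs <container-id>
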
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.540783 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.574116 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.574180 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.574190 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.574221 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.574231 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.577869 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"
name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.620242 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.658738 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.676689 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.676771 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.676783 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.676806 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.676821 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.704935 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.738725 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.777600 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.778496 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.778534 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.778546 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.778562 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.778572 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
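Both failure modes repeating through this window can be checked from the node itself. A short sketch (paths and port taken verbatim from the messages above; assumes node shell access):

# The Ready=False condition persists while the CNI provider has written
# no config into the directory the kubelet is watching.
ls -l /etc/kubernetes/cni/net.d/

# The status patches keep failing while nothing listens on the
# network-node-identity webhook port; expect "connection refused"
# until that process comes up.
curl -sk https://127.0.0.1:9743/pod
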
Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.816371 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.860587 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.880724 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.880762 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.880782 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.880800 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.880812 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.897259 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.914732 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tf9nb" event={"ID":"f9f485fd-0793-40a0-abf8-12fd3b612c87","Type":"ContainerStarted","Data":"a6e0c823a1210b5b9380e5060667c155023baf8bda5d5ab1e94bc885f2b1e0bb"} Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.916684 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerStarted","Data":"bfb2fa8324043129075f91f76cc2cd600947936a1c269fd1d116dfd187774826"} Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.916894 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:02 crc kubenswrapper[5120]: container 
&Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM
Jan 22 11:49:02 crc kubenswrapper[5120]: while [ true ];
Jan 22 11:49:02 crc kubenswrapper[5120]: do
Jan 22 11:49:02 crc kubenswrapper[5120]: for f in $(ls /tmp/serviceca); do
Jan 22 11:49:02 crc kubenswrapper[5120]: echo $f
Jan 22 11:49:02 crc kubenswrapper[5120]: ca_file_path="/tmp/serviceca/${f}"
Jan 22 11:49:02 crc kubenswrapper[5120]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/')
Jan 22 11:49:02 crc kubenswrapper[5120]: reg_dir_path="/etc/docker/certs.d/${f}"
Jan 22 11:49:02 crc kubenswrapper[5120]: if [ -e "${reg_dir_path}" ]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: cp -u $ca_file_path $reg_dir_path/ca.crt
Jan 22 11:49:02 crc kubenswrapper[5120]: else
Jan 22 11:49:02 crc kubenswrapper[5120]: mkdir $reg_dir_path
Jan 22 11:49:02 crc kubenswrapper[5120]: cp $ca_file_path $reg_dir_path/ca.crt
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: done
Jan 22 11:49:02 crc kubenswrapper[5120]: for d in $(ls /etc/docker/certs.d); do
Jan 22 11:49:02 crc kubenswrapper[5120]: echo $d
Jan 22 11:49:02 crc kubenswrapper[5120]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./')
Jan 22 11:49:02 crc kubenswrapper[5120]: reg_conf_path="/tmp/serviceca/${dp}"
Jan 22 11:49:02 crc kubenswrapper[5120]: if [ ! -e "${reg_conf_path}" ]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: rm -rf /etc/docker/certs.d/$d
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: done
Jan 22 11:49:02 crc kubenswrapper[5120]: sleep 60 & wait ${!}
Jan 22 11:49:02 crc kubenswrapper[5120]: done
Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdqkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tf9nb_openshift-image-registry(f9f485fd-0793-40a0-abf8-12fd3b612c87): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError"
Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.917724 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod"
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" event={"ID":"cdb50da0-eb06-4959-b8da-70919924f77e","Type":"ContainerStarted","Data":"20963fbe51218d226586341531cebabcba165784d34f9b709674547be7d8df72"} Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.917991 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tf9nb" podUID="f9f485fd-0793-40a0-abf8-12fd3b612c87" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.918647 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs4xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-rg989_openshift-multus(97df0621-ddba-4462-8134-59bc671c7351): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.918989 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4lzht" event={"ID":"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087","Type":"ContainerStarted","Data":"082b622a176aa05319a4fc66bfbdcdbb3ba81ad686d896f0acc0ae2f995c8919"} Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.919298 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:02 crc kubenswrapper[5120]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 22 11:49:02 crc kubenswrapper[5120]: set -euo pipefail Jan 22 11:49:02 crc kubenswrapper[5120]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 22 11:49:02 crc kubenswrapper[5120]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 22 11:49:02 crc kubenswrapper[5120]: # As the secret mount is 
optional we must wait for the files to be present.
Jan 22 11:49:02 crc kubenswrapper[5120]: # The service is created in monitor.yaml and this is created in sdn.yaml.
Jan 22 11:49:02 crc kubenswrapper[5120]: TS=$(date +%s)
Jan 22 11:49:02 crc kubenswrapper[5120]: WARN_TS=$(( ${TS} + $(( 20 * 60)) ))
Jan 22 11:49:02 crc kubenswrapper[5120]: HAS_LOGGED_INFO=0
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: log_missing_certs(){
Jan 22 11:49:02 crc kubenswrapper[5120]: CUR_TS=$(date +%s)
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes.
Jan 22 11:49:02 crc kubenswrapper[5120]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then
Jan 22 11:49:02 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes.
Jan 22 11:49:02 crc kubenswrapper[5120]: HAS_LOGGED_INFO=1
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: }
Jan 22 11:49:02 crc kubenswrapper[5120]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do
Jan 22 11:49:02 crc kubenswrapper[5120]: log_missing_certs
Jan 22 11:49:02 crc kubenswrapper[5120]: sleep 5
Jan 22 11:49:02 crc kubenswrapper[5120]: done
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy
Jan 22 11:49:02 crc kubenswrapper[5120]: exec /usr/bin/kube-rbac-proxy \
Jan 22 11:49:02 crc kubenswrapper[5120]: --logtostderr \
Jan 22 11:49:02 crc kubenswrapper[5120]: --secure-listen-address=:9108 \
Jan 22 11:49:02 crc kubenswrapper[5120]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \
Jan 22 11:49:02 crc kubenswrapper[5120]: --upstream=http://127.0.0.1:29108/ \
Jan 22 11:49:02 crc kubenswrapper[5120]: --tls-private-key-file=${TLS_PK} \
Jan 22 11:49:02 crc kubenswrapper[5120]: --tls-cert-file=${TLS_CERT}
Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lt4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xzh79_openshift-ovn-kubernetes(cdb50da0-eb06-4959-b8da-70919924f77e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError"
Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.919869 5120
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-rg989" podUID="97df0621-ddba-4462-8134-59bc671c7351" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.920094 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"948f3922f0403f01af9c080b4700105b9cfcfffd97d2155e3cc2c89092d9038d"} Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.920947 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:02 crc kubenswrapper[5120]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 22 11:49:02 crc kubenswrapper[5120]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 22 11:49:02 crc kubenswrapper[5120]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zz7fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-4lzht_openshift-multus(67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.921258 5120 
kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 22 11:49:02 crc kubenswrapper[5120]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: set -o allexport
Jan 22 11:49:02 crc kubenswrapper[5120]: source "/env/_master"
Jan 22 11:49:02 crc kubenswrapper[5120]: set +o allexport
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet "
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet "
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet "
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "" != "" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet "
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: persistent_ips_enabled_flag="--enable-persistent-ips"
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: # This is needed so that converting clusters from GA to TP
Jan 22 11:49:02 crc kubenswrapper[5120]: # will rollout control plane pods as well
Jan 22 11:49:02 crc kubenswrapper[5120]: network_segmentation_enabled_flag=
Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_enabled_flag=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" != "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: network_segmentation_enabled_flag="--enable-network-segmentation"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: route_advertisements_enable_flag=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: route_advertisements_enable_flag="--enable-route-advertisements"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: # Enable multi-network policy if configured (control-plane always full mode)
Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_policy_enabled_flag=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: # Enable admin network policy if configured (control-plane always full mode)
Jan 22 11:49:02 crc kubenswrapper[5120]: admin_network_policy_enabled_flag=
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: admin_network_policy_enabled_flag="--enable-admin-network-policy"
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: if [ "shared" == "shared" ]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode shared"
Jan 22 11:49:02 crc kubenswrapper[5120]: elif [ "shared" == "local" ]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode local"
Jan 22 11:49:02 crc kubenswrapper[5120]: else
Jan 22 11:49:02 crc kubenswrapper[5120]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"."
Jan 22 11:49:02 crc kubenswrapper[5120]: exit 1
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}"
Jan 22 11:49:02 crc kubenswrapper[5120]: exec /usr/bin/ovnkube \
Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-interconnect \
Jan 22 11:49:02 crc kubenswrapper[5120]: --init-cluster-manager "${K8S_NODE}" \
Jan 22 11:49:02 crc kubenswrapper[5120]: --config-file=/run/ovnkube-config/ovnkube.conf \
Jan 22 11:49:02 crc kubenswrapper[5120]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \
Jan 22 11:49:02 crc kubenswrapper[5120]: --metrics-bind-address "127.0.0.1:29108" \
Jan 22 11:49:02 crc kubenswrapper[5120]: --metrics-enable-pprof \
Jan 22 11:49:02 crc kubenswrapper[5120]: --metrics-enable-config-duration \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v4_join_subnet_opt} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v6_join_subnet_opt} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v4_transit_switch_subnet_opt} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${ovn_v6_transit_switch_subnet_opt} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${dns_name_resolver_enabled_flag} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${persistent_ips_enabled_flag} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${multi_network_enabled_flag} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${network_segmentation_enabled_flag} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${gateway_mode_flags} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${route_advertisements_enable_flag} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${preconfigured_udn_addresses_enable_flag} \
Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-ip=true \
Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-firewall=true \
Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-qos=true \
Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-egress-service=true \
Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-multicast \
Jan 22 11:49:02 crc kubenswrapper[5120]: --enable-multi-external-gateway=true \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${multi_network_policy_enabled_flag} \
Jan 22 11:49:02 crc kubenswrapper[5120]: ${admin_network_policy_enabled_flag}
Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lt4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xzh79_openshift-ovn-kubernetes(cdb50da0-eb06-4959-b8da-70919924f77e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError"
Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.921331 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"89844fac781a686f2175b05b0f7c607c93448977e06c70e055b15e62df93a488"}
Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.921717 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 22 11:49:02 crc kubenswrapper[5120]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig
Jan 22 11:49:02 crc kubenswrapper[5120]: apiVersion: v1
Jan 22 11:49:02 crc kubenswrapper[5120]: clusters:
Jan 22 11:49:02 crc kubenswrapper[5120]: - cluster:
Jan 22 11:49:02 crc kubenswrapper[5120]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Jan 22 11:49:02 crc kubenswrapper[5120]: server: https://api-int.crc.testing:6443
Jan 22 11:49:02 crc kubenswrapper[5120]: name: default-cluster
Jan 22 11:49:02 crc kubenswrapper[5120]: contexts:
Jan 22 11:49:02 crc kubenswrapper[5120]: - context:
Jan 22 11:49:02 crc kubenswrapper[5120]: cluster: default-cluster
Jan 22 11:49:02 crc kubenswrapper[5120]: namespace: default
Jan 22 11:49:02 crc kubenswrapper[5120]: user: default-auth
Jan 22 11:49:02 crc kubenswrapper[5120]: name: default-context
Jan 22 11:49:02 crc kubenswrapper[5120]: current-context: default-context
Jan 22 11:49:02 crc kubenswrapper[5120]: kind: Config
Jan 22 11:49:02 crc kubenswrapper[5120]: preferences: {}
Jan 22 11:49:02 crc kubenswrapper[5120]: users:
Jan 22 11:49:02 crc kubenswrapper[5120]: - name: default-auth
Jan 22 11:49:02 crc kubenswrapper[5120]: user:
Jan 22 11:49:02 crc kubenswrapper[5120]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
Jan 22 11:49:02 crc kubenswrapper[5120]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
Jan 22 11:49:02 crc kubenswrapper[5120]: EOF
Jan 22 11:49:02 crc kubenswrapper[5120]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdzrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-2mf7v_openshift-ovn-kubernetes(dd62bdde-a6c1-42b3-9585-ba64c63cbb51): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.922060 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-4lzht" podUID="67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.922323 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wrdkl" event={"ID":"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f","Type":"ContainerStarted","Data":"cead94ca34f70bd435c09fd64bff64731b52e59517244bfd77f36dc376930de5"} Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.922329 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.923143 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.923168 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scbgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.923727 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 22 11:49:02 crc kubenswrapper[5120]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash
Jan 22 11:49:02 crc kubenswrapper[5120]: set -uo pipefail
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: OPENSHIFT_MARKER="openshift-generated-node-resolver"
Jan 22 11:49:02 crc kubenswrapper[5120]: HOSTS_FILE="/etc/hosts"
Jan 22 11:49:02 crc kubenswrapper[5120]: TEMP_FILE="/tmp/hosts.tmp"
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: IFS=', ' read -r -a services <<< "${SERVICES}"
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: # Make a temporary file with the old hosts file's attributes.
Jan 22 11:49:02 crc kubenswrapper[5120]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then
Jan 22 11:49:02 crc kubenswrapper[5120]: echo "Failed to preserve hosts file. Exiting."
Jan 22 11:49:02 crc kubenswrapper[5120]: exit 1
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: while true; do
Jan 22 11:49:02 crc kubenswrapper[5120]: declare -A svc_ips
Jan 22 11:49:02 crc kubenswrapper[5120]: for svc in "${services[@]}"; do
Jan 22 11:49:02 crc kubenswrapper[5120]: # Fetch service IP from cluster dns if present. We make several tries
Jan 22 11:49:02 crc kubenswrapper[5120]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones
Jan 22 11:49:02 crc kubenswrapper[5120]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not
Jan 22 11:49:02 crc kubenswrapper[5120]: # support UDP loadbalancers and require reaching DNS through TCP.
Jan 22 11:49:02 crc kubenswrapper[5120]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 22 11:49:02 crc kubenswrapper[5120]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 22 11:49:02 crc kubenswrapper[5120]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Jan 22 11:49:02 crc kubenswrapper[5120]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"')
Jan 22 11:49:02 crc kubenswrapper[5120]: for i in ${!cmds[*]}
Jan 22 11:49:02 crc kubenswrapper[5120]: do
Jan 22 11:49:02 crc kubenswrapper[5120]: ips=($(eval "${cmds[i]}"))
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: svc_ips["${svc}"]="${ips[@]}"
Jan 22 11:49:02 crc kubenswrapper[5120]: break
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: done
Jan 22 11:49:02 crc kubenswrapper[5120]: done
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: # Update /etc/hosts only if we get valid service IPs
Jan 22 11:49:02 crc kubenswrapper[5120]: # We will not update /etc/hosts when there is coredns service outage or api unavailability
Jan 22 11:49:02 crc kubenswrapper[5120]: # Stale entries could exist in /etc/hosts if the service is deleted
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ -n "${svc_ips[*]-}" ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: # Build a new hosts file from /etc/hosts with our custom entries filtered out
Jan 22 11:49:02 crc kubenswrapper[5120]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then
Jan 22 11:49:02 crc kubenswrapper[5120]: # Only continue rebuilding the hosts entries if its original content is preserved
Jan 22 11:49:02 crc kubenswrapper[5120]: sleep 60 & wait
Jan 22 11:49:02 crc kubenswrapper[5120]: continue
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: # Append resolver entries for services
Jan 22 11:49:02 crc kubenswrapper[5120]: rc=0
Jan 22 11:49:02 crc kubenswrapper[5120]: for svc in "${!svc_ips[@]}"; do
Jan 22 11:49:02 crc kubenswrapper[5120]: for ip in ${svc_ips[${svc}]}; do
Jan 22 11:49:02 crc kubenswrapper[5120]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$?
Jan 22 11:49:02 crc kubenswrapper[5120]: done
Jan 22 11:49:02 crc kubenswrapper[5120]: done
Jan 22 11:49:02 crc kubenswrapper[5120]: if [[ $rc -ne 0 ]]; then
Jan 22 11:49:02 crc kubenswrapper[5120]: sleep 60 & wait
Jan 22 11:49:02 crc kubenswrapper[5120]: continue
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: 
Jan 22 11:49:02 crc kubenswrapper[5120]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior
Jan 22 11:49:02 crc kubenswrapper[5120]: # Replace /etc/hosts with our modified version if needed
Jan 22 11:49:02 crc kubenswrapper[5120]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
Jan 22 11:49:02 crc kubenswrapper[5120]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn
Jan 22 11:49:02 crc kubenswrapper[5120]: fi
Jan 22 11:49:02 crc kubenswrapper[5120]: sleep 60 & wait
Jan 22 11:49:02 crc kubenswrapper[5120]: unset svc_ips
Jan 22 11:49:02 crc kubenswrapper[5120]: done
Jan 22 11:49:02 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgcrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-wrdkl_openshift-dns(eaa5719f-fed8-44ac-a759-d2c22d9a2a7f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 22 11:49:02 crc kubenswrapper[5120]: > logger="UnhandledError"
Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.924771 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-wrdkl" podUID="eaa5719f-fed8-44ac-a759-d2c22d9a2a7f"
Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.925190 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scbgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:02 crc kubenswrapper[5120]: E0122 11:49:02.926376 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.938460 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.978367 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.982720 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.982773 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.982788 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.982811 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:02 crc kubenswrapper[5120]: I0122 11:49:02.982826 5120 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:02Z","lastTransitionTime":"2026-01-22T11:49:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.018886 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.065365 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39c0a299-bb61-4f5d-8177-544cd4abe1ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\
"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.085268 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.085336 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.085346 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.085362 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.085370 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.098875 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.139456 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.177729 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.187276 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.187347 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.187383 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.187405 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.187415 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.219517 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.252832 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.252880 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.252938 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.253006 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253091 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253146 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:05.253131089 +0000 UTC m=+79.997079450 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253469 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253552 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:05.25354024 +0000 UTC m=+79.997488591 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253616 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253634 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253648 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253679 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:05.253669243 +0000 UTC m=+79.997617594 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253730 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253741 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253750 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.253778 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:05.253769075 +0000 UTC m=+79.997717436 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.256426 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.289774 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.289812 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.289822 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.289835 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.289844 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.302379 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.339278 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.353588 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.353691 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:05.353671633 +0000 UTC m=+80.097619974 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.378368 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5041
72345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: 
I0122 11:49:03.392275 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.392355 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.392379 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.392410 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.392439 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.416626 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"al
locatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.454596 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.454716 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.454775 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs podName:dababdca-8afb-452f-865f-54de3aec21d9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:05.454761659 +0000 UTC m=+80.198710000 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs") pod "network-metrics-daemon-ldwx4" (UID: "dababdca-8afb-452f-865f-54de3aec21d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.460188 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.495235 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.495307 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.495329 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.495356 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.495377 5120 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.498395 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.540781 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.571159 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.571311 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.571338 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.571397 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.571460 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.571546 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.571450 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:03 crc kubenswrapper[5120]: E0122 11:49:03.571867 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.579275 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.580479 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.582616 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.584583 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.586234 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.590719 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.594540 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.596565 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.597476 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.597514 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.597524 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.597540 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.597550 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.598167 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.599043 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.601795 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.604840 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.607246 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.608157 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.610577 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.612129 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.613163 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.614315 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.616353 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.617278 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.618514 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.619932 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 22 11:49:03 
crc kubenswrapper[5120]: I0122 11:49:03.621692 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.621861 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.622594 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.623454 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.624646 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.625969 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 22 
11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.626783 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.627485 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.629788 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.630342 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.631750 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.632608 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.634406 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.635269 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.636349 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.636916 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.637596 5120 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.638117 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.640676 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.642110 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 22 
11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.643023 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.644292 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.644806 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.646142 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.646807 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.647280 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.648400 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.649497 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.650731 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.651532 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.652751 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.653614 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.655014 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.656171 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 
11:49:03.658009 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.659302 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.659581 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.660113 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.660893 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.699899 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.699947 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.699978 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.699995 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.700009 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.700356 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.739984 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.778762 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" 
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.801776 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.801984 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.802049 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.802136 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.802215 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.821770 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.858816 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.904128 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.904178 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.904192 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.904208 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.904217 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:03Z","lastTransitionTime":"2026-01-22T11:49:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.906760 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.949390 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39c0a299-bb61-4f5d-8177-544cd4abe1ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},
\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:03 crc kubenswrapper[5120]: I0122 11:49:03.979817 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.006118 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.006162 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.006173 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.006186 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.006196 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.022163 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.060694 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.103664 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.108196 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.108236 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.108247 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.108263 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.108273 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.141665 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.186118 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b
21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.209927 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.210044 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.210073 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.210106 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.210129 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.221883 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.263225 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.297273 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.312048 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.312097 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.312106 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.312120 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.312131 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.340330 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.376731 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.414473 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.414518 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.414530 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.414545 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.414556 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.418947 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.516792 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.516851 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.516866 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.516886 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.516941 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.620467 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.620756 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.620994 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.621112 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.621217 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.724095 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.724159 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.724170 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.724183 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.724211 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.826721 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.826756 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.826764 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.826777 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.826786 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.929005 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.929236 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.929299 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.929404 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:04 crc kubenswrapper[5120]: I0122 11:49:04.929495 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:04Z","lastTransitionTime":"2026-01-22T11:49:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.031727 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.031768 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.031777 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.031790 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.031800 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.133276 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.133321 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.133345 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.133360 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.133369 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.234719 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.234818 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.234832 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.234850 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.234862 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.274445 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.274495 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.274520 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.274559 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274651 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274703 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-01-22 11:49:09.274689176 +0000 UTC m=+84.018637517 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274723 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274766 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274796 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274806 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274824 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:09.274800098 +0000 UTC m=+84.018748439 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274900 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:09.27488227 +0000 UTC m=+84.018830611 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274745 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274915 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.274921 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.275002 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:09.274948592 +0000 UTC m=+84.018896933 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.338033 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.338079 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.338091 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.338111 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.338123 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.375234 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.375370 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:09.375347243 +0000 UTC m=+84.119295594 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.441015 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.441079 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.441096 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.441118 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.441136 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.459354 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.459415 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.459429 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.459448 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.459462 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.471044 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.474594 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.474638 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.474650 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.474667 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.474681 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.476237 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.476353 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.476408 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs podName:dababdca-8afb-452f-865f-54de3aec21d9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:09.476388989 +0000 UTC m=+84.220337340 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs") pod "network-metrics-daemon-ldwx4" (UID: "dababdca-8afb-452f-865f-54de3aec21d9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.490208 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.490241 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.490250 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.490263 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.490273 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.510481 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.510508 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.510517 5120
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.510529 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.510539 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.529327 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.529420 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.529450 5120
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.529490 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.529521 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.542863 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.543171 5120 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.545620 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.545682 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.545703 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.545730 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.545752 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.571203 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.571392 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.571211 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.571427 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.571203 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.571535 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.571683 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:05 crc kubenswrapper[5120]: E0122 11:49:05.571798 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.586470 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.596750 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.609670 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.632227 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39c0a299-bb61-4f5d-8177-544cd4abe1ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b
4680680c221c5eaf186b5986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.644913 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.648110 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.648137 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.648150 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.648164 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.648172 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.655169 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.663752 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.677803 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.690420 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.713183 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.731030 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocate
dResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.744765 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linu
x\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.749839 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.750063 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.750159 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.750247 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.750399 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.754450 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.765303 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.773217 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.781493 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.792526 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.801380 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.809148 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.854157 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.854240 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.854267 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.854305 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.854327 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.958552 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.958610 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.958656 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.958684 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:05 crc kubenswrapper[5120]: I0122 11:49:05.958699 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:05Z","lastTransitionTime":"2026-01-22T11:49:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.061151 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.061241 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.061257 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.061277 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.061290 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.164098 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.164256 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.164281 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.164311 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.164333 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.267440 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.267548 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.267580 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.267620 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.267649 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.371312 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.371410 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.371433 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.371460 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.371475 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.474324 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.474460 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.474487 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.474525 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.474549 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.578161 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.578217 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.578231 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.578250 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.578262 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.681350 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.681421 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.681436 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.681480 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.681495 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.783269 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.783318 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.783333 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.783352 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.783364 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.886764 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.886859 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.886880 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.886914 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.886936 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.989531 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.989585 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.989596 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.989614 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:06 crc kubenswrapper[5120]: I0122 11:49:06.989625 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:06Z","lastTransitionTime":"2026-01-22T11:49:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.092344 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.092407 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.092425 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.092446 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.092461 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.195504 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.195588 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.195612 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.195725 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.195754 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.299403 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.299495 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.299524 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.299563 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.299630 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.402388 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.402447 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.402465 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.402486 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.402500 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.505701 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.505810 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.505843 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.505873 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.505892 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.571034 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.571049 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:07 crc kubenswrapper[5120]: E0122 11:49:07.571298 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.571297 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.571578 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:07 crc kubenswrapper[5120]: E0122 11:49:07.571849 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:07 crc kubenswrapper[5120]: E0122 11:49:07.572013 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:07 crc kubenswrapper[5120]: E0122 11:49:07.572121 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.608756 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.608849 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.608874 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.608902 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.608920 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.711451 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.711507 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.711531 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.711551 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.711564 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.814541 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.814624 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.814653 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.814688 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.814714 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.918313 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.918381 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.918411 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.918436 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:07 crc kubenswrapper[5120]: I0122 11:49:07.918450 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:07Z","lastTransitionTime":"2026-01-22T11:49:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.020923 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.021067 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.021092 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.021122 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.021150 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.123949 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.124027 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.124040 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.124061 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.124075 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.226815 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.226901 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.226928 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.226999 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.227027 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.330148 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.330229 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.330251 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.330279 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.330307 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.433781 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.433866 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.433887 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.433914 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.433931 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.536772 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.536814 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.536826 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.536840 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.536851 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.639948 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.640014 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.640025 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.640042 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.640053 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.742242 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.742302 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.742314 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.742338 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.742351 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.846126 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.846174 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.846184 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.846201 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.846215 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.906844 5120 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.948949 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.949040 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.949056 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.949081 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:08 crc kubenswrapper[5120]: I0122 11:49:08.949101 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:08Z","lastTransitionTime":"2026-01-22T11:49:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.053127 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.053180 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.053192 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.053209 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.053222 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.155309 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.155356 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.155366 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.155384 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.155394 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.258319 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.258410 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.258424 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.258446 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.258461 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.328326 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.328372 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.328399 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.328421 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328479 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328480 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328529 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:17.328515793 +0000 UTC m=+92.072464134 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328527 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328541 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-01-22 11:49:17.328535853 +0000 UTC m=+92.072484194 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328550 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328560 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328590 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:17.328581304 +0000 UTC m=+92.072529645 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328767 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328823 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328838 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.328946 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:17.328919243 +0000 UTC m=+92.072867584 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.361497 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.361684 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.361712 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.361747 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.361801 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.429922 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.430271 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:17.430212815 +0000 UTC m=+92.174161166 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.432771 5120 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.464304 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.464352 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.464362 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.464377 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.464387 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.531615 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.531940 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.532111 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs podName:dababdca-8afb-452f-865f-54de3aec21d9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:17.532074111 +0000 UTC m=+92.276022472 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs") pod "network-metrics-daemon-ldwx4" (UID: "dababdca-8afb-452f-865f-54de3aec21d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.566739 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.566800 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.566833 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.566857 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.566875 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.571373 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.571416 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.571424 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.571554 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.571931 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.572085 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.572173 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:09 crc kubenswrapper[5120]: E0122 11:49:09.572402 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.669838 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.669890 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.669903 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.669922 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.669935 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.772617 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.772708 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.772736 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.772768 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:09 crc kubenswrapper[5120]: I0122 11:49:09.772792 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:09Z","lastTransitionTime":"2026-01-22T11:49:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... the identical five-entry node-status cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats roughly every 100 ms from 11:49:09.877 through 11:49:11.325; only the timestamps change ...]
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.429391 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.429487 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.429507 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.429541 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.429565 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:11Z","lastTransitionTime":"2026-01-22T11:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.533095 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.533203 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.533224 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.533257 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.533278 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:11Z","lastTransitionTime":"2026-01-22T11:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.571330 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.571420 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.571351 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.571605 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:11 crc kubenswrapper[5120]: E0122 11:49:11.571784 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:11 crc kubenswrapper[5120]: E0122 11:49:11.572026 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:11 crc kubenswrapper[5120]: E0122 11:49:11.572282 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:11 crc kubenswrapper[5120]: E0122 11:49:11.571750 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.635670 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.635726 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.635738 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.635754 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:11 crc kubenswrapper[5120]: I0122 11:49:11.635765 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:11Z","lastTransitionTime":"2026-01-22T11:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry node-status cycle repeats, timestamps aside, from 11:49:11.739 through 11:49:13.387; the final repetition follows ...]
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.490148 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.490234 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.490256 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.490282 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.490300 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:13Z","lastTransitionTime":"2026-01-22T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.571392 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.571455 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.571595 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.571895 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.572016 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.572196 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.572240 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.572294 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.574338 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Jan 22 11:49:13 crc kubenswrapper[5120]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Jan 22 11:49:13 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then
Jan 22 11:49:13 crc kubenswrapper[5120]: set -o allexport
Jan 22 11:49:13 crc kubenswrapper[5120]: source "/env/_master"
Jan 22 11:49:13 crc kubenswrapper[5120]: set +o allexport
Jan 22 11:49:13 crc kubenswrapper[5120]: fi
Jan 22 11:49:13 crc kubenswrapper[5120]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled.
Jan 22 11:49:13 crc kubenswrapper[5120]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791
Jan 22 11:49:13 crc kubenswrapper[5120]: ho_enable="--enable-hybrid-overlay"
Jan 22 11:49:13 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook"
Jan 22 11:49:13 crc kubenswrapper[5120]: # extra-allowed-user: service account `ovn-kubernetes-control-plane`
Jan 22 11:49:13 crc kubenswrapper[5120]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager)
Jan 22 11:49:13 crc kubenswrapper[5120]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Jan 22 11:49:13 crc kubenswrapper[5120]: --webhook-cert-dir="/etc/webhook-cert" \
Jan 22 11:49:13 crc kubenswrapper[5120]: --webhook-host=127.0.0.1 \
Jan 22 11:49:13 crc kubenswrapper[5120]: --webhook-port=9743 \
Jan 22 11:49:13 crc kubenswrapper[5120]: ${ho_enable} \
Jan 22 11:49:13 crc kubenswrapper[5120]: --enable-interconnect \
Jan 22 11:49:13 crc kubenswrapper[5120]: --disable-approver \
Jan 22 11:49:13 crc kubenswrapper[5120]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \
Jan 22 11:49:13 crc kubenswrapper[5120]: --wait-for-kubernetes-api=200s \
Jan 22 11:49:13 crc kubenswrapper[5120]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \
Jan 22 11:49:13 crc kubenswrapper[5120]: --loglevel="${LOGLEVEL}"
Jan 22 11:49:13 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Jan 22 11:49:13 crc kubenswrapper[5120]: > logger="UnhandledError"
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.574453 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scbgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
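CreateContainerConfigError with "services have not yet been read at least once, cannot construct envvars" is the kubelet refusing to build a container's environment before its Service informer has synced: it injects per-service variables (*_SERVICE_HOST, *_SERVICE_PORT, plus the KUBERNETES_SERVICE_* variables for the API server) into every container, and it cannot enumerate them until Services have been listed at least once. The condition is normally transient and clears once the informer syncs. A hedged sketch of what that injected environment looks like on a healthy cluster; <pod> is a placeholder, not a name from this log:

# Kubelet-injected service environment, visible inside any running container:
oc -n openshift-multus exec <pod> -- env | grep -E '_SERVICE_(HOST|PORT)' | sort
# Pods that do not need the per-service variables can set
# spec.enableServiceLinks: false; the kubelet then injects only the
# kubernetes (API server) service variables.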
Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.577293 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scbgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
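The &Container{...} blobs in these entries are the kubelet's Go pretty-print of the container specs it failed to start. The same information can be read in structured form from the API; a sketch assuming cluster access, with the pod name and namespace taken from the log:

# Container names and images of the failing machine-config-daemon pod:
oc -n openshift-machine-config-operator get pod machine-config-daemon-dq269 -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.image}{"\n"}{end}'
# Or the full spec:
oc -n openshift-machine-config-operator get pod machine-config-daemon-dq269 -o yaml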
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:13 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.578436 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 11:49:13 crc kubenswrapper[5120]: E0122 11:49:13.579266 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.592423 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.592491 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.592504 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.592524 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.592568 5120 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:13Z","lastTransitionTime":"2026-01-22T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.694659 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.694744 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.694758 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.694801 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.694817 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:13Z","lastTransitionTime":"2026-01-22T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.797941 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.798063 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.798084 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.798112 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.798133 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:13Z","lastTransitionTime":"2026-01-22T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.900774 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.900850 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.900874 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.900903 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:13 crc kubenswrapper[5120]: I0122 11:49:13.900925 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:13Z","lastTransitionTime":"2026-01-22T11:49:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.004034 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.004136 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.004160 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.004197 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.004224 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.109680 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.109805 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.109833 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.109876 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.109912 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.213102 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.213197 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.213220 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.213248 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.213268 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.315331 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.315391 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.315404 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.315422 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.315433 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.418329 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.418580 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.418601 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.418626 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.418646 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.520663 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.520725 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.520746 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.520770 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.520787 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.623407 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.623464 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.623476 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.623497 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.623509 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.725708 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.725767 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.725784 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.725802 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.725812 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.827607 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.827652 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.827663 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.827677 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.827722 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.929919 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.929980 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.929989 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.930001 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:14 crc kubenswrapper[5120]: I0122 11:49:14.930010 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:14Z","lastTransitionTime":"2026-01-22T11:49:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.032401 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.032497 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.032528 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.032573 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.032601 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.135178 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.135215 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.135224 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.135238 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.135248 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.237827 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.237867 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.237876 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.237890 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.237899 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.340228 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.340306 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.340328 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.340357 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.340377 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.442362 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.442407 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.442416 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.442433 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.442445 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.545188 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.545264 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.545289 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.545319 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.545337 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.571071 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.571083 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.571310 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.571766 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.571794 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.571776 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.571898 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.571942 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.573389 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:15 crc kubenswrapper[5120]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Jan 22 11:49:15 crc kubenswrapper[5120]: while [ true ]; Jan 22 11:49:15 crc kubenswrapper[5120]: do Jan 22 11:49:15 crc kubenswrapper[5120]: for f in $(ls /tmp/serviceca); do Jan 22 11:49:15 crc kubenswrapper[5120]: echo $f Jan 22 11:49:15 crc kubenswrapper[5120]: ca_file_path="/tmp/serviceca/${f}" Jan 22 11:49:15 crc kubenswrapper[5120]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Jan 22 11:49:15 crc kubenswrapper[5120]: reg_dir_path="/etc/docker/certs.d/${f}" Jan 22 11:49:15 crc kubenswrapper[5120]: if [ -e "${reg_dir_path}" ]; then Jan 22 11:49:15 crc kubenswrapper[5120]: cp -u $ca_file_path $reg_dir_path/ca.crt Jan 22 11:49:15 crc kubenswrapper[5120]: else Jan 22 11:49:15 crc kubenswrapper[5120]: mkdir $reg_dir_path Jan 22 11:49:15 crc kubenswrapper[5120]: cp $ca_file_path $reg_dir_path/ca.crt Jan 22 11:49:15 crc kubenswrapper[5120]: fi Jan 22 11:49:15 crc kubenswrapper[5120]: done Jan 22 11:49:15 crc kubenswrapper[5120]: for d in $(ls /etc/docker/certs.d); do Jan 22 11:49:15 crc kubenswrapper[5120]: echo $d Jan 22 11:49:15 crc kubenswrapper[5120]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Jan 22 11:49:15 crc kubenswrapper[5120]: reg_conf_path="/tmp/serviceca/${dp}" Jan 22 11:49:15 crc kubenswrapper[5120]: if [ ! -e "${reg_conf_path}" ]; then Jan 22 11:49:15 crc kubenswrapper[5120]: rm -rf /etc/docker/certs.d/$d Jan 22 11:49:15 crc kubenswrapper[5120]: fi Jan 22 11:49:15 crc kubenswrapper[5120]: done Jan 22 11:49:15 crc kubenswrapper[5120]: sleep 60 & wait ${!} Jan 22 11:49:15 crc kubenswrapper[5120]: done Jan 22 11:49:15 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdqkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-tf9nb_openshift-image-registry(f9f485fd-0793-40a0-abf8-12fd3b612c87): CreateContainerConfigError: 
services have not yet been read at least once, cannot construct envvars Jan 22 11:49:15 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.574097 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cs4xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-rg989_openshift-multus(97df0621-ddba-4462-8134-59bc671c7351): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.574747 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-tf9nb" podUID="f9f485fd-0793-40a0-abf8-12fd3b612c87" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.575717 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-rg989" podUID="97df0621-ddba-4462-8134-59bc671c7351" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.591614 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.606059 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.615769 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.623515 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.636496 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.644207 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.647567 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.647671 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.647693 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.647722 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.647741 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.654635 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.668741 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.682010 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.701442 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.715329 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.729939 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.742895 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc 
kubenswrapper[5120]: I0122 11:49:15.749604 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.749685 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.749707 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.749738 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.749763 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.765797 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39c0a299-bb61-4f5d-8177-544cd4abe1ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.779054 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.796267 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.806203 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.817027 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.827025 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.832041 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.832087 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.832101 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.832118 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.832130 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.844810 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.848024 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.848072 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.848088 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.848107 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.848120 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.857812 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.861447 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.861500 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.861514 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.861531 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.861543 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.870154 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.873305 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.873355 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.873368 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.873386 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.873397 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.882469 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.888832 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.888991 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.889027 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.889068 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.889099 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.902514 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:15 crc kubenswrapper[5120]: E0122 11:49:15.902653 5120 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.904471 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.904514 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.904525 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.904543 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:15 crc kubenswrapper[5120]: I0122 11:49:15.904557 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:15Z","lastTransitionTime":"2026-01-22T11:49:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.006899 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.007005 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.007026 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.007051 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.007068 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.109228 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.109296 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.109316 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.109343 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.109360 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.211283 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.211404 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.211460 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.211495 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.211524 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.314295 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.314347 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.314360 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.314379 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.314394 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.416436 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.416491 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.416504 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.416524 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.416536 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.518792 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.518837 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.518847 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.518861 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.518870 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.572370 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.572538 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.572939 5120 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.573499 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:16 crc kubenswrapper[5120]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Jan 22 11:49:16 crc kubenswrapper[5120]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Jan 22 11:49:16 crc kubenswrapper[5120]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zz7fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-4lzht_openshift-multus(67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:16 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.573760 5120 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:16 crc kubenswrapper[5120]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Jan 22 11:49:16 crc kubenswrapper[5120]: apiVersion: v1 Jan 22 11:49:16 crc kubenswrapper[5120]: clusters: Jan 22 11:49:16 crc kubenswrapper[5120]: - cluster: Jan 22 11:49:16 crc kubenswrapper[5120]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Jan 22 11:49:16 crc kubenswrapper[5120]: server: https://api-int.crc.testing:6443 Jan 22 11:49:16 crc kubenswrapper[5120]: name: default-cluster Jan 22 11:49:16 crc kubenswrapper[5120]: contexts: Jan 22 11:49:16 crc kubenswrapper[5120]: - context: Jan 22 11:49:16 crc kubenswrapper[5120]: cluster: default-cluster Jan 22 11:49:16 crc kubenswrapper[5120]: namespace: default Jan 22 11:49:16 crc kubenswrapper[5120]: user: default-auth Jan 22 11:49:16 crc kubenswrapper[5120]: name: default-context Jan 22 11:49:16 crc kubenswrapper[5120]: current-context: default-context Jan 22 11:49:16 crc kubenswrapper[5120]: kind: Config Jan 22 11:49:16 crc kubenswrapper[5120]: preferences: {} Jan 22 11:49:16 crc kubenswrapper[5120]: users: Jan 22 11:49:16 crc kubenswrapper[5120]: - name: default-auth Jan 22 11:49:16 crc kubenswrapper[5120]: user: Jan 22 11:49:16 crc kubenswrapper[5120]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 22 11:49:16 crc kubenswrapper[5120]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Jan 22 11:49:16 crc kubenswrapper[5120]: EOF Jan 22 11:49:16 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdzrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-2mf7v_openshift-ovn-kubernetes(dd62bdde-a6c1-42b3-9585-ba64c63cbb51): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:16 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.573966 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:16 crc kubenswrapper[5120]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Jan 22 11:49:16 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:16 crc kubenswrapper[5120]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 22 11:49:16 crc kubenswrapper[5120]: source /etc/kubernetes/apiserver-url.env Jan 22 11:49:16 crc kubenswrapper[5120]: else Jan 22 11:49:16 crc 
kubenswrapper[5120]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 22 11:49:16 crc kubenswrapper[5120]: exit 1 Jan 22 11:49:16 crc kubenswrapper[5120]: fi Jan 22 11:49:16 crc kubenswrapper[5120]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 22 11:49:16 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:16 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.575009 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.575040 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.575049 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-4lzht" podUID="67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087" Jan 22 11:49:16 crc kubenswrapper[5120]: E0122 11:49:16.575105 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct 
envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.620812 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.620861 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.620874 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.620891 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.620905 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.723247 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.723304 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.723323 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.723345 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.723362 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.825406 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.825464 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.825476 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.825497 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.825509 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.928585 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.928647 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.928661 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.928679 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:16 crc kubenswrapper[5120]: I0122 11:49:16.928690 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:16Z","lastTransitionTime":"2026-01-22T11:49:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.031313 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.031356 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.031366 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.031380 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.031391 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.133534 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.133600 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.133617 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.133637 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.133650 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.235977 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.236044 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.236058 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.236073 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.236082 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.334145 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.334361 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334451 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334563 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:33.334540643 +0000 UTC m=+108.078489034 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.334458 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334590 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334608 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334618 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.334652 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: 
\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334695 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:33.334679486 +0000 UTC m=+108.078627827 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334706 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.334737 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:33.334726368 +0000 UTC m=+108.078674709 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.335073 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.335156 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.335221 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.335326 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:33.335311062 +0000 UTC m=+108.079259403 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.338271 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.338309 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.338322 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.338336 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.338346 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.436312 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.436543 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:33.436510822 +0000 UTC m=+108.180459163 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.440543 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.440623 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.440642 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.440670 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.440691 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.538794 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.538999 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.539086 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs podName:dababdca-8afb-452f-865f-54de3aec21d9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:33.539064665 +0000 UTC m=+108.283013026 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs") pod "network-metrics-daemon-ldwx4" (UID: "dababdca-8afb-452f-865f-54de3aec21d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.542661 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.542807 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.542821 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.542838 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.542848 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.571750 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.571999 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.572071 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.572161 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.572167 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.572186 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.572389 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.572641 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.573425 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:17 crc kubenswrapper[5120]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Jan 22 11:49:17 crc kubenswrapper[5120]: set -euo pipefail Jan 22 11:49:17 crc kubenswrapper[5120]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Jan 22 11:49:17 crc kubenswrapper[5120]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Jan 22 11:49:17 crc kubenswrapper[5120]: # As the secret mount is optional we must wait for the files to be present. Jan 22 11:49:17 crc kubenswrapper[5120]: # The service is created in monitor.yaml and this is created in sdn.yaml. Jan 22 11:49:17 crc kubenswrapper[5120]: TS=$(date +%s) Jan 22 11:49:17 crc kubenswrapper[5120]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Jan 22 11:49:17 crc kubenswrapper[5120]: HAS_LOGGED_INFO=0 Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: log_missing_certs(){ Jan 22 11:49:17 crc kubenswrapper[5120]: CUR_TS=$(date +%s) Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Jan 22 11:49:17 crc kubenswrapper[5120]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Jan 22 11:49:17 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Jan 22 11:49:17 crc kubenswrapper[5120]: HAS_LOGGED_INFO=1 Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: } Jan 22 11:49:17 crc kubenswrapper[5120]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Jan 22 11:49:17 crc kubenswrapper[5120]: log_missing_certs Jan 22 11:49:17 crc kubenswrapper[5120]: sleep 5 Jan 22 11:49:17 crc kubenswrapper[5120]: done Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Jan 22 11:49:17 crc kubenswrapper[5120]: exec /usr/bin/kube-rbac-proxy \ Jan 22 11:49:17 crc kubenswrapper[5120]: --logtostderr \ Jan 22 11:49:17 crc kubenswrapper[5120]: --secure-listen-address=:9108 \ Jan 22 11:49:17 crc kubenswrapper[5120]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Jan 22 11:49:17 crc kubenswrapper[5120]: --upstream=http://127.0.0.1:29108/ \ Jan 22 11:49:17 crc kubenswrapper[5120]: --tls-private-key-file=${TLS_PK} \ Jan 22 11:49:17 crc kubenswrapper[5120]: --tls-cert-file=${TLS_CERT} Jan 22 11:49:17 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lt4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xzh79_openshift-ovn-kubernetes(cdb50da0-eb06-4959-b8da-70919924f77e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:17 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.573627 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:17 crc kubenswrapper[5120]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Jan 22 11:49:17 crc kubenswrapper[5120]: set -uo pipefail Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 22 11:49:17 crc kubenswrapper[5120]: HOSTS_FILE="/etc/hosts" Jan 22 11:49:17 crc kubenswrapper[5120]: TEMP_FILE="/tmp/hosts.tmp" Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: # Make a temporary file with the old hosts file's attributes. Jan 22 11:49:17 crc kubenswrapper[5120]: if ! 
cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 22 11:49:17 crc kubenswrapper[5120]: echo "Failed to preserve hosts file. Exiting." Jan 22 11:49:17 crc kubenswrapper[5120]: exit 1 Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: while true; do Jan 22 11:49:17 crc kubenswrapper[5120]: declare -A svc_ips Jan 22 11:49:17 crc kubenswrapper[5120]: for svc in "${services[@]}"; do Jan 22 11:49:17 crc kubenswrapper[5120]: # Fetch service IP from cluster dns if present. We make several tries Jan 22 11:49:17 crc kubenswrapper[5120]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 22 11:49:17 crc kubenswrapper[5120]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 22 11:49:17 crc kubenswrapper[5120]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 22 11:49:17 crc kubenswrapper[5120]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 22 11:49:17 crc kubenswrapper[5120]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 22 11:49:17 crc kubenswrapper[5120]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 22 11:49:17 crc kubenswrapper[5120]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 22 11:49:17 crc kubenswrapper[5120]: for i in ${!cmds[*]} Jan 22 11:49:17 crc kubenswrapper[5120]: do Jan 22 11:49:17 crc kubenswrapper[5120]: ips=($(eval "${cmds[i]}")) Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: svc_ips["${svc}"]="${ips[@]}" Jan 22 11:49:17 crc kubenswrapper[5120]: break Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: done Jan 22 11:49:17 crc kubenswrapper[5120]: done Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: # Update /etc/hosts only if we get valid service IPs Jan 22 11:49:17 crc kubenswrapper[5120]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 22 11:49:17 crc kubenswrapper[5120]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 22 11:49:17 crc kubenswrapper[5120]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 22 11:49:17 crc kubenswrapper[5120]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 22 11:49:17 crc kubenswrapper[5120]: sleep 60 & wait Jan 22 11:49:17 crc kubenswrapper[5120]: continue Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: # Append resolver entries for services Jan 22 11:49:17 crc kubenswrapper[5120]: rc=0 Jan 22 11:49:17 crc kubenswrapper[5120]: for svc in "${!svc_ips[@]}"; do Jan 22 11:49:17 crc kubenswrapper[5120]: for ip in ${svc_ips[${svc}]}; do Jan 22 11:49:17 crc kubenswrapper[5120]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Jan 22 11:49:17 crc kubenswrapper[5120]: done Jan 22 11:49:17 crc kubenswrapper[5120]: done Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ $rc -ne 0 ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: sleep 60 & wait Jan 22 11:49:17 crc kubenswrapper[5120]: continue Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 22 11:49:17 crc kubenswrapper[5120]: # Replace /etc/hosts with our modified version if needed Jan 22 11:49:17 crc kubenswrapper[5120]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 22 11:49:17 crc kubenswrapper[5120]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: sleep 60 & wait Jan 22 11:49:17 crc kubenswrapper[5120]: unset svc_ips Jan 22 11:49:17 crc kubenswrapper[5120]: done Jan 22 11:49:17 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgcrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-wrdkl_openshift-dns(eaa5719f-fed8-44ac-a759-d2c22d9a2a7f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:17 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.574801 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-wrdkl" podUID="eaa5719f-fed8-44ac-a759-d2c22d9a2a7f" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.576606 5120 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jan 22 11:49:17 crc kubenswrapper[5120]: container 
&Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ -f "/env/_master" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: set -o allexport Jan 22 11:49:17 crc kubenswrapper[5120]: source "/env/_master" Jan 22 11:49:17 crc kubenswrapper[5120]: set +o allexport Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "" != "" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: persistent_ips_enabled_flag="--enable-persistent-ips" Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: # This is needed so that converting clusters from GA to TP Jan 22 11:49:17 crc kubenswrapper[5120]: # will rollout control plane pods as well Jan 22 11:49:17 crc kubenswrapper[5120]: network_segmentation_enabled_flag= Jan 22 11:49:17 crc kubenswrapper[5120]: multi_network_enabled_flag= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "true" != "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: multi_network_enabled_flag="--enable-multi-network" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: network_segmentation_enabled_flag="--enable-network-segmentation" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: route_advertisements_enable_flag= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: route_advertisements_enable_flag="--enable-route-advertisements" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 
crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: # Enable multi-network policy if configured (control-plane always full mode) Jan 22 11:49:17 crc kubenswrapper[5120]: multi_network_policy_enabled_flag= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "false" == "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: # Enable admin network policy if configured (control-plane always full mode) Jan 22 11:49:17 crc kubenswrapper[5120]: admin_network_policy_enabled_flag= Jan 22 11:49:17 crc kubenswrapper[5120]: if [[ "true" == "true" ]]; then Jan 22 11:49:17 crc kubenswrapper[5120]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: if [ "shared" == "shared" ]; then Jan 22 11:49:17 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode shared" Jan 22 11:49:17 crc kubenswrapper[5120]: elif [ "shared" == "local" ]; then Jan 22 11:49:17 crc kubenswrapper[5120]: gateway_mode_flags="--gateway-mode local" Jan 22 11:49:17 crc kubenswrapper[5120]: else Jan 22 11:49:17 crc kubenswrapper[5120]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Jan 22 11:49:17 crc kubenswrapper[5120]: exit 1 Jan 22 11:49:17 crc kubenswrapper[5120]: fi Jan 22 11:49:17 crc kubenswrapper[5120]: Jan 22 11:49:17 crc kubenswrapper[5120]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Jan 22 11:49:17 crc kubenswrapper[5120]: exec /usr/bin/ovnkube \ Jan 22 11:49:17 crc kubenswrapper[5120]: --enable-interconnect \ Jan 22 11:49:17 crc kubenswrapper[5120]: --init-cluster-manager "${K8S_NODE}" \ Jan 22 11:49:17 crc kubenswrapper[5120]: --config-file=/run/ovnkube-config/ovnkube.conf \ Jan 22 11:49:17 crc kubenswrapper[5120]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Jan 22 11:49:17 crc kubenswrapper[5120]: --metrics-bind-address "127.0.0.1:29108" \ Jan 22 11:49:17 crc kubenswrapper[5120]: --metrics-enable-pprof \ Jan 22 11:49:17 crc kubenswrapper[5120]: --metrics-enable-config-duration \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${ovn_v4_join_subnet_opt} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${ovn_v6_join_subnet_opt} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${ovn_v4_transit_switch_subnet_opt} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${ovn_v6_transit_switch_subnet_opt} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${dns_name_resolver_enabled_flag} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${persistent_ips_enabled_flag} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${multi_network_enabled_flag} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${network_segmentation_enabled_flag} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${gateway_mode_flags} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${route_advertisements_enable_flag} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${preconfigured_udn_addresses_enable_flag} \ Jan 22 11:49:17 crc kubenswrapper[5120]: --enable-egress-ip=true \ Jan 22 11:49:17 crc kubenswrapper[5120]: --enable-egress-firewall=true \ Jan 22 11:49:17 crc kubenswrapper[5120]: --enable-egress-qos=true \ Jan 22 11:49:17 crc kubenswrapper[5120]: --enable-egress-service=true \ Jan 22 11:49:17 crc kubenswrapper[5120]: --enable-multicast \ Jan 22 11:49:17 crc kubenswrapper[5120]: --enable-multi-external-gateway=true \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${multi_network_policy_enabled_flag} \ Jan 22 11:49:17 crc kubenswrapper[5120]: ${admin_network_policy_enabled_flag} Jan 22 11:49:17 crc kubenswrapper[5120]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lt4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-xzh79_openshift-ovn-kubernetes(cdb50da0-eb06-4959-b8da-70919924f77e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 11:49:17 crc kubenswrapper[5120]: > logger="UnhandledError" Jan 22 11:49:17 crc kubenswrapper[5120]: E0122 11:49:17.577740 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.645898 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.645941 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.645985 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.646010 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.646025 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.748261 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.748321 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.748334 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.748353 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.748366 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.851871 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.852011 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.852044 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.852127 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.852165 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.955207 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.955260 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.955276 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.955294 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:17 crc kubenswrapper[5120]: I0122 11:49:17.955305 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:17Z","lastTransitionTime":"2026-01-22T11:49:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.057565 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.057647 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.057671 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.057699 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.057723 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.161072 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.161143 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.161156 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.161176 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.161190 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.263382 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.263430 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.263440 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.263453 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.263463 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.366382 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.366485 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.366513 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.366546 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.366570 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.469254 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.469307 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.469319 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.469342 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.469368 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.571723 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.571815 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.571870 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.571899 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.571925 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.675409 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.675468 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.675484 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.675505 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.675517 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.777090 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.777131 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.777142 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.777157 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.777167 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.879330 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.879381 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.879395 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.879411 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.879424 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.981724 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.981800 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.981814 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.981832 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:18 crc kubenswrapper[5120]: I0122 11:49:18.981844 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:18Z","lastTransitionTime":"2026-01-22T11:49:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.084157 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.084207 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.084217 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.084232 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.084244 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:19Z","lastTransitionTime":"2026-01-22T11:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.186494 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.186564 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.186579 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.186604 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.186620 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:19Z","lastTransitionTime":"2026-01-22T11:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.288660 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.288701 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.288711 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.288724 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.288734 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:19Z","lastTransitionTime":"2026-01-22T11:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.390748 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.390825 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.390837 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.390851 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.390860 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:19Z","lastTransitionTime":"2026-01-22T11:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.492674 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.492727 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.492740 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.492759 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.492771 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:19Z","lastTransitionTime":"2026-01-22T11:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.571366 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.571408 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:19 crc kubenswrapper[5120]: E0122 11:49:19.571510 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:19 crc kubenswrapper[5120]: E0122 11:49:19.571609 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.571705 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.571730 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:19 crc kubenswrapper[5120]: E0122 11:49:19.571838 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:19 crc kubenswrapper[5120]: E0122 11:49:19.572005 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.594677 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.594729 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.594740 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.594754 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.594764 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:19Z","lastTransitionTime":"2026-01-22T11:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.696435 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.696471 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.696481 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.696494 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.696504 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:19Z","lastTransitionTime":"2026-01-22T11:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.797851 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.797890 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.797898 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.797913 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.797923 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:19Z","lastTransitionTime":"2026-01-22T11:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.900486 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.900538 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.900550 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.900569 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:19 crc kubenswrapper[5120]: I0122 11:49:19.900581 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:19Z","lastTransitionTime":"2026-01-22T11:49:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.003255 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.003309 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.003322 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.003338 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.003347 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:20Z","lastTransitionTime":"2026-01-22T11:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.045005 5120 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.104714 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.104754 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.104766 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.104782 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.104792 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:20Z","lastTransitionTime":"2026-01-22T11:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.207509 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.207603 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.207621 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.207645 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.207659 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:20Z","lastTransitionTime":"2026-01-22T11:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.309926 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.310004 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.310024 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.310039 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.310048 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:20Z","lastTransitionTime":"2026-01-22T11:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.411601 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.411646 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.411655 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.411669 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.411679 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:20Z","lastTransitionTime":"2026-01-22T11:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.513161 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.513211 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.513223 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.513238 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.513250 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:20Z","lastTransitionTime":"2026-01-22T11:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.615532 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.615575 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.615584 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.615598 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.615613 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:20Z","lastTransitionTime":"2026-01-22T11:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.718012 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.718055 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.718093 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.718109 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.718120 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:20Z","lastTransitionTime":"2026-01-22T11:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.820411 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.820470 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.820482 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.820500 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.820512 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:20Z","lastTransitionTime":"2026-01-22T11:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.922510 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.922551 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.922560 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.922574 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:20 crc kubenswrapper[5120]: I0122 11:49:20.922584 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:20Z","lastTransitionTime":"2026-01-22T11:49:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.024414 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.024474 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.024483 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.024500 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.024508 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:21Z","lastTransitionTime":"2026-01-22T11:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.127043 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.127083 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.127093 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.127106 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.127116 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:21Z","lastTransitionTime":"2026-01-22T11:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.229049 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.229121 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.229138 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.229156 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.229167 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:21Z","lastTransitionTime":"2026-01-22T11:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.331643 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.331847 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.331883 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.331909 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.331992 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:21Z","lastTransitionTime":"2026-01-22T11:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.433812 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.433859 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.433873 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.433890 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.433902 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:21Z","lastTransitionTime":"2026-01-22T11:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.536418 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.536702 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.536776 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.536889 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.536974 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:21Z","lastTransitionTime":"2026-01-22T11:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.572061 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:21 crc kubenswrapper[5120]: E0122 11:49:21.572456 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.572149 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:21 crc kubenswrapper[5120]: E0122 11:49:21.572675 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.572129 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:21 crc kubenswrapper[5120]: E0122 11:49:21.572909 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.572208 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:21 crc kubenswrapper[5120]: E0122 11:49:21.573336 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.639170 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.639279 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.639304 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.639341 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.639365 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:21Z","lastTransitionTime":"2026-01-22T11:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.742162 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.742209 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.742219 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.742236 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.742245 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:21Z","lastTransitionTime":"2026-01-22T11:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.844559 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.844604 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.844616 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.844631 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.844697 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:21Z","lastTransitionTime":"2026-01-22T11:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.946999 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.947050 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.947066 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.947085 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:21 crc kubenswrapper[5120]: I0122 11:49:21.947097 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:21Z","lastTransitionTime":"2026-01-22T11:49:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.049058 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.049110 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.049128 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.049144 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.049154 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:22Z","lastTransitionTime":"2026-01-22T11:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.150873 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.150918 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.150931 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.150947 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.150977 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:22Z","lastTransitionTime":"2026-01-22T11:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.253452 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.253562 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.253577 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.253595 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.253609 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:22Z","lastTransitionTime":"2026-01-22T11:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.355858 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.355914 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.355924 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.355942 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.355969 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:22Z","lastTransitionTime":"2026-01-22T11:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.458528 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.458588 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.458600 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.458622 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.458636 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:22Z","lastTransitionTime":"2026-01-22T11:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.560917 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.560980 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.560990 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.561005 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.561014 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:22Z","lastTransitionTime":"2026-01-22T11:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.663974 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.664058 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.664068 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.664085 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.664098 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:22Z","lastTransitionTime":"2026-01-22T11:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.766541 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.766605 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.766619 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.766636 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.766648 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:22Z","lastTransitionTime":"2026-01-22T11:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.868910 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.868999 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.869018 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.869037 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.869051 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:22Z","lastTransitionTime":"2026-01-22T11:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.971396 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.971449 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.971458 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.971476 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:22 crc kubenswrapper[5120]: I0122 11:49:22.971490 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:22Z","lastTransitionTime":"2026-01-22T11:49:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.073128 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.073177 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.073190 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.073207 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.073219 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:23Z","lastTransitionTime":"2026-01-22T11:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.175043 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.175091 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.175105 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.175121 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.175132 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:23Z","lastTransitionTime":"2026-01-22T11:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.277196 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.277246 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.277263 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.277278 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.277289 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:23Z","lastTransitionTime":"2026-01-22T11:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.378988 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.379054 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.379064 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.379083 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.379094 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:23Z","lastTransitionTime":"2026-01-22T11:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.480887 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.481022 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.481060 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.481089 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.481111 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:23Z","lastTransitionTime":"2026-01-22T11:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.571543 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.571566 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:23 crc kubenswrapper[5120]: E0122 11:49:23.571677 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.571555 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:23 crc kubenswrapper[5120]: E0122 11:49:23.571942 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:23 crc kubenswrapper[5120]: E0122 11:49:23.572023 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.572086 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:23 crc kubenswrapper[5120]: E0122 11:49:23.572225 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.582778 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.582808 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.582816 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.582829 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.582838 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:23Z","lastTransitionTime":"2026-01-22T11:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.582778 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.582808 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.582816 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.582829 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.582838 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:23Z","lastTransitionTime":"2026-01-22T11:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.685249 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.685326 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.685354 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.685384 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.685407 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:23Z","lastTransitionTime":"2026-01-22T11:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.787385 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.787444 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.787457 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.787472 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.787485 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:23Z","lastTransitionTime":"2026-01-22T11:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.890139 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.890193 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.890203 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.890218 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.890228 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:23Z","lastTransitionTime":"2026-01-22T11:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.991913 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.991981 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.991992 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.992006 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:23 crc kubenswrapper[5120]: I0122 11:49:23.992015 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:23Z","lastTransitionTime":"2026-01-22T11:49:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.094137 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.094191 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.094200 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.094214 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.094224 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.195906 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.195973 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.195985 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.196001 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.196012 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.298151 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.298189 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.298198 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.298212 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.298221 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.400131 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.400183 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.400197 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.400213 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.400223 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.502998 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.503051 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.503065 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.503080 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.503091 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.605655 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.605714 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.605728 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.605748 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.605761 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.710339 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.710811 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.710825 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.710846 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.710858 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.813750 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.813884 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.814030 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.814123 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.814159 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.917230 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.917283 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.917293 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.917309 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.917332 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:24Z","lastTransitionTime":"2026-01-22T11:49:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.984576 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"57cccfe61e8f332b9a2398e2ca5f128b7473e871fd825bfdbb35d9ba91022b81"}
Jan 22 11:49:24 crc kubenswrapper[5120]: I0122 11:49:24.984662 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24"}
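The two PLEG records above show the machine-config-daemon containers actually starting; the failures that follow are a different problem. Each status patch the kubelet sends is rejected by the API server because the pod.network-node-identity.openshift.io admission webhook, served on 127.0.0.1:9743, is not listening yet (its own pod is stuck in CreateContainerConfigError further down). A minimal TCP reachability probe for that endpoint, in Go (stdlib only; the address is taken from the errors below, and this checks only the connect step that is failing here, not the TLS/HTTP exchange the API server would perform):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Endpoint from the webhook errors below:
        // Post "https://127.0.0.1:9743/pod?timeout=10s": ... connection refused
        const addr = "127.0.0.1:9743"

        conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
        if err != nil {
            // Matches the "dial tcp 127.0.0.1:9743: connect: connection refused" records.
            fmt.Println("webhook endpoint unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("webhook endpoint is accepting TCP connections")
    }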
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.006326 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.022173 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.022611 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.022650 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.022662 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.022679 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.022692 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.038438 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.058760 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39c0a299-bb61-4f5d-8177-544cd4abe1ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"
memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},
\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.072190 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.084571 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://57cccfe61e8f332b9a2398e2ca5f128b7473e871fd825bfdbb35d9ba91022b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:49:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:49:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.095634 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 
11:49:25.110046 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.120447 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.124526 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.124579 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.124594 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.124612 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.124623 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.138550 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"image
ID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.153831 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.167619 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.176146 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.187247 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.196000 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.205425 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.216844 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.227377 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.227437 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.227449 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.227468 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.227480 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.230671 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.242044 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.329476 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.329931 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.329943 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.329985 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.329999 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.433397 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.433488 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.433508 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.433536 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.433556 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.536151 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.536205 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.536214 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.536231 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.536245 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.571803 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.571834 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.571988 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:25 crc kubenswrapper[5120]: E0122 11:49:25.571987 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.572150 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:25 crc kubenswrapper[5120]: E0122 11:49:25.572169 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:25 crc kubenswrapper[5120]: E0122 11:49:25.572262 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:25 crc kubenswrapper[5120]: E0122 11:49:25.572360 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.582616 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.598423 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdzrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2mf7v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.609884 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocate
dResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.621026 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linu
x\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.629675 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.638692 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.638727 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.638738 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.638752 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.638761 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.639917 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.648100 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.658867 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.667810 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.676294 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.682805 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wrdkl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dgcrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wrdkl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.693325 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97df0621-ddba-4462-8134-59bc671c7351\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs4xp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rg989\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.702491 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-4lzht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zz7fj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4lzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.714690 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc6361ac-72d0-485c-938e-c58010f57d78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4b2fc2ec264e1a2f47ef48ae3682ece70e9bcb0c27191badb3dbb25d763d6ed6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d6d0b4ca0fcc7c60a642256079a5ccee5482c56dd372189b46a95401451fa45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d115df90471eae10a65aefb390195da3593e903d0ad1a730847db2d29a63cc7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc 
kubenswrapper[5120]: I0122 11:49:25.739512 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39c0a299-bb61-4f5d-8177-544cd4abe1ad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://032f1b1cf07b4a93c23326f05479f43fba3a3cf6bb4b9f6c3ae29a76050edfe5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://4985527bf2ab9cc933f70f9ea2994a77482f8a24299c8efc8321a3fd5d86a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\
":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://209c9652e04417a0d9d549aa169eae5834fadfd0f9dca2eb8620fc81f999192a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7276e56b446c98c69bd713b22bf844b5cae42b8a0d8da7b8fb151efc140381ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://36ea9f6809070fa9f7f4b7e5c40fae1648814d3b300a273a28c80ea6035f76a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e9a1ca4941ec2eb248aac427dc7bbbb75c43b4680680c221c5eaf186b5986b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028713bf3e9d1dc75729378d49c58defe47bb7fc8dadd99d93e91304cec6cf84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3b9f5c7630e7e80fee0c6bceb378b3069a777f25552b1f309325e0a12134ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\
\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.741359 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.741424 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.741436 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.741452 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.741462 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.752855 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.764193 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://57cccfe61e8f332b9a2398e2ca5f128b7473e871fd825bfdbb35d9ba91022b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:49:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:49:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-scbgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dq269\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.772850 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dababdca-8afb-452f-865f-54de3aec21d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kndcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ldwx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 
11:49:25.783855 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.843417 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.843486 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.843500 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.843516 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.843527 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.945422 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.945516 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.945527 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.945540 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:25 crc kubenswrapper[5120]: I0122 11:49:25.945550 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:25Z","lastTransitionTime":"2026-01-22T11:49:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.048406 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.048470 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.048482 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.048504 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.048518 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.152073 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.152152 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.152171 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.152200 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.152221 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.252110 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.252171 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.252188 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.252206 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.252222 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: E0122 11:49:26.268596 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.273418 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.273451 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.273460 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.273473 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.273482 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: E0122 11:49:26.288531 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.292746 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.292819 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.292832 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.292868 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.292880 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: E0122 11:49:26.304661 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.309222 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.309300 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.309321 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.309359 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.309408 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: E0122 11:49:26.322263 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.329112 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.329154 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.329170 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.329188 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.329202 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: E0122 11:49:26.341173 5120 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"60403ab6-2e1e-4736-9a34-cfc7e1924d0b\\\",\\\"systemUUID\\\":\\\"382cdad4-0171-4b64-8e1b-b8f3f02e6a19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:26 crc kubenswrapper[5120]: E0122 11:49:26.341350 5120 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.342651 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.342698 5120 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.342709 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.342727 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.342738 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.445779 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.445840 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.445855 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.445876 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.445892 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.547641 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.548031 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.548044 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.548061 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.548073 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.650331 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.650433 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.650449 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.650475 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.650488 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.752288 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.752343 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.752357 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.752374 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.752390 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.859514 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.859559 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.859572 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.859592 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.859603 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.961647 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.961702 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.961721 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.961746 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.961763 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:26Z","lastTransitionTime":"2026-01-22T11:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.993705 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"98adc11275c61fcc437ee7afbd57096d086ee979acd0013b5c59c635048f3ac3"} Jan 22 11:49:26 crc kubenswrapper[5120]: I0122 11:49:26.993773 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"c84e5a6ed25fd1100d4cbdf237cc499dbd601f84526ab419d876a0dce61d0501"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.005023 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"410ef417-8c38-4aac-9a75-c1a938b0cf8c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T11:48:52Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0122 11:48:51.105406 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 11:48:51.105599 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0122 11:48:51.106804 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1158037108/tls.crt::/tmp/serving-cert-1158037108/tls.key\\\\\\\"\\\\nI0122 11:48:52.103234 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 11:48:52.104987 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 11:48:52.105003 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 11:48:52.105030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 11:48:52.105035 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 11:48:52.112491 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 11:48:52.112515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112520 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 11:48:52.112524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 11:48:52.112528 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 11:48:52.112531 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 11:48:52.112534 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 11:48:52.112540 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 11:48:52.115022 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T11:48:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.014569 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4822d3cd-955f-493d-a818-acebb52b3602\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:48:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caf1ed97ccb35c8ce9c3321194645452c5875bdadb4b2634d00114c1cedc1056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://91363fceef321ca9f1495cd188f848fae974f94b1b5732adbab842efc578074c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad731d2d8530eae95dec603d9f7a060ea885c926d453b983464949e2eb4fc2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3ad48ffe8f14cdb9c09a6ed7b7da5d4db116a1dac0653103da063524734f466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.022636 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7027ae84-efaa-474d-9221-28d77dc0af15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:47:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fa31f4d5e4e6f36d31ea882d29804b21ad3c620e6f31cf12aec3085ed0f9f9b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f232b2402a84370f16fcd5fe49fb57391d5d49d1df96442b937914a9ad6ad54\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T11:47:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T11:47:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:47:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.032308 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.041585 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tf9nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9f485fd-0793-40a0-abf8-12fd3b612c87\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdqkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tf9nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.051585 5120 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdb50da0-eb06-4959-b8da-70919924f77e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T11:49:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lt4m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T11:49:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-xzh79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.063720 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.063763 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.063775 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.063791 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.063801 5120 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.166143 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.166215 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.166229 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.166249 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.166264 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.172479 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=26.172453105 podStartE2EDuration="26.172453105s" podCreationTimestamp="2026-01-22 11:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:27.172085536 +0000 UTC m=+101.916033907" watchObservedRunningTime="2026-01-22 11:49:27.172453105 +0000 UTC m=+101.916401446" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.240414 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=26.24039668 podStartE2EDuration="26.24039668s" podCreationTimestamp="2026-01-22 11:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:27.238406301 +0000 UTC m=+101.982354832" watchObservedRunningTime="2026-01-22 11:49:27.24039668 +0000 UTC m=+101.984345021" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.268670 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.268712 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.268721 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.268735 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.268746 5120 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.272450 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podStartSLOduration=82.272425105 podStartE2EDuration="1m22.272425105s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:27.271700528 +0000 UTC m=+102.015648869" watchObservedRunningTime="2026-01-22 11:49:27.272425105 +0000 UTC m=+102.016373446" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.370974 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.371034 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.371048 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.371071 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.371085 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.473047 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.473097 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.473110 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.473127 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.473141 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.571413 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.571605 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:27 crc kubenswrapper[5120]: E0122 11:49:27.571641 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:27 crc kubenswrapper[5120]: E0122 11:49:27.571697 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.571715 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:27 crc kubenswrapper[5120]: E0122 11:49:27.571788 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.571868 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:27 crc kubenswrapper[5120]: E0122 11:49:27.571966 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.575312 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.575366 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.575377 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.575392 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.575404 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.677909 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.677984 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.677997 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.678012 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.678022 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.780454 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.780544 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.780559 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.780577 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:27 crc kubenswrapper[5120]: I0122 11:49:27.780589 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:27Z","lastTransitionTime":"2026-01-22T11:49:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:28 crc kubenswrapper[5120]: I0122 11:49:28.909339 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:28Z","lastTransitionTime":"2026-01-22T11:49:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.002270 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tf9nb" event={"ID":"f9f485fd-0793-40a0-abf8-12fd3b612c87","Type":"ContainerStarted","Data":"5a26a20f8db539ea64a8dabdc450533dc213011b1ea84582f770f8da2b853204"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.004515 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" event={"ID":"cdb50da0-eb06-4959-b8da-70919924f77e","Type":"ContainerStarted","Data":"53d59b7d2c319aaf356a45432146f39c690dafb55e7dcf1cae4ae5ee99919935"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.004588 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" event={"ID":"cdb50da0-eb06-4959-b8da-70919924f77e","Type":"ContainerStarted","Data":"b21acaba3cb296157d5914b47ec901abef4ecd818f666b1cfb316d247e9b6411"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.010912 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.010997 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.011009 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.011026 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.011037 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.041779 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=28.041742101 podStartE2EDuration="28.041742101s" podCreationTimestamp="2026-01-22 11:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:29.041167016 +0000 UTC m=+103.785115357" watchObservedRunningTime="2026-01-22 11:49:29.041742101 +0000 UTC m=+103.785690452" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.075120 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=28.075088328 podStartE2EDuration="28.075088328s" podCreationTimestamp="2026-01-22 11:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:29.055693783 +0000 UTC m=+103.799642144" watchObservedRunningTime="2026-01-22 11:49:29.075088328 +0000 UTC m=+103.819036709" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.104524 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tf9nb" podStartSLOduration=84.104504339 podStartE2EDuration="1m24.104504339s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:29.090600018 +0000 UTC m=+103.834548369" watchObservedRunningTime="2026-01-22 11:49:29.104504339 +0000 UTC m=+103.848452680" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.113218 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.113295 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.113312 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.113347 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.113366 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.124299 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" podStartSLOduration=84.124278213 podStartE2EDuration="1m24.124278213s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:29.124199691 +0000 UTC m=+103.868148032" watchObservedRunningTime="2026-01-22 11:49:29.124278213 +0000 UTC m=+103.868226554" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.215010 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.215063 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.215081 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.215098 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.215109 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.317777 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.317871 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.317894 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.317925 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.317948 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.421213 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.421269 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.421279 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.421300 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.421311 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.523697 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.523765 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.523779 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.523998 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.524015 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.571109 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:29 crc kubenswrapper[5120]: E0122 11:49:29.571235 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.571310 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.571356 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:29 crc kubenswrapper[5120]: E0122 11:49:29.571464 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:29 crc kubenswrapper[5120]: E0122 11:49:29.571521 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.571552 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:29 crc kubenswrapper[5120]: E0122 11:49:29.571628 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.626581 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.626637 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.626649 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.626665 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:29 crc kubenswrapper[5120]: I0122 11:49:29.626674 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:29Z","lastTransitionTime":"2026-01-22T11:49:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.550503 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:30Z","lastTransitionTime":"2026-01-22T11:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.573216 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:49:30 crc kubenswrapper[5120]: E0122 11:49:30.573585 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.654781 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.655349 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.655367 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.655389 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.655404 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:30Z","lastTransitionTime":"2026-01-22T11:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.757766 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.757835 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.757848 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.757870 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.757883 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:30Z","lastTransitionTime":"2026-01-22T11:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:30 crc kubenswrapper[5120]: I0122 11:49:30.964110 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:30Z","lastTransitionTime":"2026-01-22T11:49:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.015024 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"b9f937f5e3872af6c060d152d7740bf273be6070248e28fee7ad3af6a194ef09"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.018557 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wrdkl" event={"ID":"eaa5719f-fed8-44ac-a759-d2c22d9a2a7f","Type":"ContainerStarted","Data":"b11f230eb0d79f0c57e2b3e60b36d832b324f6a02f94ba8d75924b3605e32a7d"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.021754 5120 generic.go:358] "Generic (PLEG): container finished" podID="97df0621-ddba-4462-8134-59bc671c7351" containerID="310ec001d9a4dce7a548d57b1f0b1cdcd52e5b7937bc72e95db5b1033742786b" exitCode=0 Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.021851 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerDied","Data":"310ec001d9a4dce7a548d57b1f0b1cdcd52e5b7937bc72e95db5b1033742786b"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.067147 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.067211 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.067225 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.067246 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.067261 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.077980 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wrdkl" podStartSLOduration=87.077931026 podStartE2EDuration="1m27.077931026s" podCreationTimestamp="2026-01-22 11:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:31.053079027 +0000 UTC m=+105.797027428" watchObservedRunningTime="2026-01-22 11:49:31.077931026 +0000 UTC m=+105.821879367" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.169994 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.170063 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.170080 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.170102 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.170120 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.273432 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.273503 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.273517 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.273539 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.273554 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.376811 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.376873 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.376888 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.376905 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.376917 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.480321 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.480422 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.480444 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.480482 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.480505 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.571014 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.571644 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.571729 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:31 crc kubenswrapper[5120]: E0122 11:49:31.572049 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:31 crc kubenswrapper[5120]: E0122 11:49:31.572714 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:31 crc kubenswrapper[5120]: E0122 11:49:31.572900 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.573098 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:31 crc kubenswrapper[5120]: E0122 11:49:31.573241 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.583556 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.583640 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.583653 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.583671 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.583682 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.893059 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.996632 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.996691 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.996707 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.996728 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:31 crc kubenswrapper[5120]: I0122 11:49:31.996746 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:31Z","lastTransitionTime":"2026-01-22T11:49:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.028130 5120 generic.go:358] "Generic (PLEG): container finished" podID="97df0621-ddba-4462-8134-59bc671c7351" containerID="3893663ea5a85fdb7a9ba62aff94b278d0d941f8da598a8444fcdaaa8a0a96fa" exitCode=0 Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.028238 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerDied","Data":"3893663ea5a85fdb7a9ba62aff94b278d0d941f8da598a8444fcdaaa8a0a96fa"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.033467 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"c76cdb48f202911a3d0b51441046ec86c1d066a9c70e94de7578c6d134092895"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.035700 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4lzht" event={"ID":"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087","Type":"ContainerStarted","Data":"d29b8141fbabedfe7a0b24544216f57974fa5374814f1bca04930180d84aef59"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.037516 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356" exitCode=0 Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.037596 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.078418 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4lzht" podStartSLOduration=87.078382032 podStartE2EDuration="1m27.078382032s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:32.07831452 +0000 UTC m=+106.822262901" watchObservedRunningTime="2026-01-22 11:49:32.078382032 
+0000 UTC m=+106.822330393" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.100267 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.100314 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.100324 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.100339 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.100349 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.215082 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.215536 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.215553 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.215570 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.215582 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.322908 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.322948 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.322973 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.322988 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.323000 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.425456 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.425511 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.425526 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.425547 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.425562 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.528342 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.528414 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.528426 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.528449 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.528461 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.631186 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.631258 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.631274 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.631297 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.631314 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.734413 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.734469 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.734480 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.734499 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.734509 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.837196 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.837306 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.837333 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.837362 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.837384 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.940481 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.940564 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.940582 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.940610 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:32 crc kubenswrapper[5120]: I0122 11:49:32.940630 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:32Z","lastTransitionTime":"2026-01-22T11:49:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.057002 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.057144 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.057162 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.057191 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.057215 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.161281 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.161337 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.161358 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.161374 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.161385 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.264443 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.264513 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.264667 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.264765 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.264793 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.334875 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.335114 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335159 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335227 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335307 5120 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.335316 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335379 5120 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335481 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.335405664 +0000 UTC m=+140.079354005 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335520 5120 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335526 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.335508536 +0000 UTC m=+140.079456877 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.335579 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335669 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335673 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.335649411 +0000 UTC m=+140.079597752 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335681 5120 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335694 5120 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.335747 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.335727652 +0000 UTC m=+140.079675993 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.368374 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.368459 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.368479 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.368507 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.368652 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.437414 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.437997 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.437843755 +0000 UTC m=+140.181792126 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.473726 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.473794 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.473813 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.474028 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.474048 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.571700 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.571684 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.571936 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.571945 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.572505 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.572672 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.572990 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.573112 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.576756 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.576791 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.576806 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.576829 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.576844 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.602353 5120 generic.go:358] "Generic (PLEG): container finished" podID="97df0621-ddba-4462-8134-59bc671c7351" containerID="1e4c017d60fd56591949c3a9cb6fdffe623b4653c8a74d54fa756a0ec9f724be" exitCode=0 Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.602474 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerDied","Data":"1e4c017d60fd56591949c3a9cb6fdffe623b4653c8a74d54fa756a0ec9f724be"} Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.607708 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f"} Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.607756 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25"} Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.640257 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.640507 5120 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:33 crc kubenswrapper[5120]: E0122 11:49:33.640561 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs podName:dababdca-8afb-452f-865f-54de3aec21d9 nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.640545801 +0000 UTC m=+140.384494152 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs") pod "network-metrics-daemon-ldwx4" (UID: "dababdca-8afb-452f-865f-54de3aec21d9") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.678673 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.678711 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.678722 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.678735 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.678744 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.780352 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.780402 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.780413 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.780429 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.780439 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.884976 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.885027 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.885040 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.885056 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.885070 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.986997 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.987051 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.987064 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.987082 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:33 crc kubenswrapper[5120]: I0122 11:49:33.987096 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:33Z","lastTransitionTime":"2026-01-22T11:49:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.089366 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.089432 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.089447 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.089467 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.089479 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.192224 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.192293 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.192310 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.192332 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.192346 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.294661 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.294754 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.294769 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.294796 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.294811 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.404164 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.404214 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.404226 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.404244 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.404254 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.511774 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.511832 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.511845 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.511862 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.511874 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.612450 5120 generic.go:358] "Generic (PLEG): container finished" podID="97df0621-ddba-4462-8134-59bc671c7351" containerID="74e5daf9f7179d097931f8055d630e02712aaa4ef010292832f9de7652b7cbdc" exitCode=0 Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.612539 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerDied","Data":"74e5daf9f7179d097931f8055d630e02712aaa4ef010292832f9de7652b7cbdc"} Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.614062 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.614102 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.614119 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.614134 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.614147 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.617097 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31"} Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.617148 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf"} Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.617157 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860"} Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.617165 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7"} Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.717507 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.717547 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.717560 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.717577 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.717590 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.819926 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.819990 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.820004 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.820022 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.820034 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.927815 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.927896 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.927916 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.927941 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:34 crc kubenswrapper[5120]: I0122 11:49:34.927975 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:34Z","lastTransitionTime":"2026-01-22T11:49:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.030158 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.030237 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.030261 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.030286 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.030301 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.133278 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.133353 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.133373 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.133396 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.133410 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.235595 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.235654 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.235672 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.235696 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.235714 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.338673 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.338748 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.338766 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.338790 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.338806 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.444711 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.444808 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.444827 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.444867 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.444882 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.548212 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.548274 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.548288 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.548311 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.548325 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.573704 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.573842 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:35 crc kubenswrapper[5120]: E0122 11:49:35.573856 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.574019 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:35 crc kubenswrapper[5120]: E0122 11:49:35.574209 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:35 crc kubenswrapper[5120]: E0122 11:49:35.574369 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.574415 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:35 crc kubenswrapper[5120]: E0122 11:49:35.574474 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.628008 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerStarted","Data":"e3c3c822d2a64996a2c76d93e02f2509fd39119c3b5870208ceeb5df9ac81da7"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.652774 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.652849 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.652865 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.652888 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.652910 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.755037 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.755082 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.755092 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.755109 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.755120 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.857334 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.857719 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.857829 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.857948 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.858088 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.960633 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.960706 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.960722 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.960743 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:35 crc kubenswrapper[5120]: I0122 11:49:35.960759 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:35Z","lastTransitionTime":"2026-01-22T11:49:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.063322 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.063383 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.063395 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.063408 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.063424 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:36Z","lastTransitionTime":"2026-01-22T11:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.166244 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.166291 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.166302 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.166316 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.166326 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:36Z","lastTransitionTime":"2026-01-22T11:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.269006 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.269306 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.269421 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.269517 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.269607 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:36Z","lastTransitionTime":"2026-01-22T11:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.372981 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.373059 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.373082 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.373109 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.373127 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:36Z","lastTransitionTime":"2026-01-22T11:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.475190 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.475234 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.475246 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.475261 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.475272 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:36Z","lastTransitionTime":"2026-01-22T11:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.526191 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.526241 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.526253 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.526270 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.526282 5120 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T11:49:36Z","lastTransitionTime":"2026-01-22T11:49:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
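
The four "Recording event message" records and the NotReady condition above repeat on every node-status update tick (roughly every 100 ms here) for as long as the CNI configuration is missing. A minimal Python sketch, not part of the log, that tallies these bursts in a saved journal dump, assuming one record per line; the script and file names are placeholders:

    # tally_node_events.py: count kubelet "Recording event message" records,
    # e.g.  python3 tally_node_events.py < kubelet.log
    import re
    import sys
    from collections import Counter

    # Keyed to the record shape above: ... node="crc" event="NodeNotReady"
    EVENT_RE = re.compile(r'"Recording event message for node" node="(\S+)" event="(\w+)"')

    counts = Counter(m.groups() for line in sys.stdin if (m := EVENT_RE.search(line)))
    for (node, event), n in counts.most_common():
        print(f"{node}  {event}  {n}")
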
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.573668 5120 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.574895 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"]
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.577727 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.580694 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.581525 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.582075 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.582104 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.585278 5120 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.643094 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac"}
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.645071 5120 generic.go:358] "Generic (PLEG): container finished" podID="97df0621-ddba-4462-8134-59bc671c7351" containerID="e3c3c822d2a64996a2c76d93e02f2509fd39119c3b5870208ceeb5df9ac81da7" exitCode=0
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.645105 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerDied","Data":"e3c3c822d2a64996a2c76d93e02f2509fd39119c3b5870208ceeb5df9ac81da7"}
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.679853 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/24792411-b989-4171-80eb-92ec2002d172-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.679899 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24792411-b989-4171-80eb-92ec2002d172-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.679924 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/24792411-b989-4171-80eb-92ec2002d172-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.679942 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24792411-b989-4171-80eb-92ec2002d172-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.680018 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24792411-b989-4171-80eb-92ec2002d172-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.780731 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/24792411-b989-4171-80eb-92ec2002d172-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.780779 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24792411-b989-4171-80eb-92ec2002d172-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.780805 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/24792411-b989-4171-80eb-92ec2002d172-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.780827 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24792411-b989-4171-80eb-92ec2002d172-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.781061 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/24792411-b989-4171-80eb-92ec2002d172-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.781107 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24792411-b989-4171-80eb-92ec2002d172-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.781175 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/24792411-b989-4171-80eb-92ec2002d172-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.782090 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24792411-b989-4171-80eb-92ec2002d172-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.796786 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24792411-b989-4171-80eb-92ec2002d172-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.802490 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24792411-b989-4171-80eb-92ec2002d172-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qpcc4\" (UID: \"24792411-b989-4171-80eb-92ec2002d172\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: I0122 11:49:36.891349 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4"
Jan 22 11:49:36 crc kubenswrapper[5120]: W0122 11:49:36.932080 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24792411_b989_4171_80eb_92ec2002d172.slice/crio-b87a1b613da6f82f7d0ff920d604a739c14a6e26e4c01c3c89773fe8fbe2037f WatchSource:0}: Error finding container b87a1b613da6f82f7d0ff920d604a739c14a6e26e4c01c3c89773fe8fbe2037f: Status 404 returned error can't find the container with id b87a1b613da6f82f7d0ff920d604a739c14a6e26e4c01c3c89773fe8fbe2037f
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.571250 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.571261 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.571301 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:37 crc kubenswrapper[5120]: E0122 11:49:37.571385 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:37 crc kubenswrapper[5120]: E0122 11:49:37.571505 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:37 crc kubenswrapper[5120]: E0122 11:49:37.571725 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.571786 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:37 crc kubenswrapper[5120]: E0122 11:49:37.572001 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.654344 5120 generic.go:358] "Generic (PLEG): container finished" podID="97df0621-ddba-4462-8134-59bc671c7351" containerID="fbe1c72e23aac177d08f8889b1c095634d89ea3a7fa0c703aa47e19a45c6274c" exitCode=0
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.654485 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerDied","Data":"fbe1c72e23aac177d08f8889b1c095634d89ea3a7fa0c703aa47e19a45c6274c"}
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.656783 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4" event={"ID":"24792411-b989-4171-80eb-92ec2002d172","Type":"ContainerStarted","Data":"88e29354fcf1df2f1d68a6d530f454844c505540da80d37c683ddef0606d2cb4"}
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.656850 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4" event={"ID":"24792411-b989-4171-80eb-92ec2002d172","Type":"ContainerStarted","Data":"b87a1b613da6f82f7d0ff920d604a739c14a6e26e4c01c3c89773fe8fbe2037f"}
Jan 22 11:49:37 crc kubenswrapper[5120]: I0122 11:49:37.705214 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qpcc4" podStartSLOduration=92.705194912 podStartE2EDuration="1m32.705194912s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:37.704096026 +0000 UTC m=+112.448044387" watchObservedRunningTime="2026-01-22 11:49:37.705194912 +0000 UTC m=+112.449143253"
Jan 22 11:49:38 crc kubenswrapper[5120]: I0122 11:49:38.666292 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rg989" event={"ID":"97df0621-ddba-4462-8134-59bc671c7351","Type":"ContainerStarted","Data":"9f28c7cde882aaba8df3805668fda0e1e1c980daebff4ea6b32dec7ab2b631de"}
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.571377 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.571432 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.571377 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:39 crc kubenswrapper[5120]: E0122 11:49:39.571580 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.571380 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 22 11:49:39 crc kubenswrapper[5120]: E0122 11:49:39.571673 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 22 11:49:39 crc kubenswrapper[5120]: E0122 11:49:39.571490 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 22 11:49:39 crc kubenswrapper[5120]: E0122 11:49:39.571677 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.673409 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerStarted","Data":"29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42"}
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.673887 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.673989 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.701903 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rg989" podStartSLOduration=94.70188464 podStartE2EDuration="1m34.70188464s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:38.691403419 +0000 UTC m=+113.435351780" watchObservedRunningTime="2026-01-22 11:49:39.70188464 +0000 UTC m=+114.445832981"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.702343 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podStartSLOduration=94.702337832 podStartE2EDuration="1m34.702337832s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:39.700023855 +0000 UTC m=+114.443972206" watchObservedRunningTime="2026-01-22 11:49:39.702337832 +0000 UTC m=+114.446286173"
Jan 22 11:49:39 crc kubenswrapper[5120]: I0122 11:49:39.740903 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v"
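
The pod_startup_latency_tracker records above report the same interval twice: podStartSLOduration as plain seconds and podStartE2EDuration as a Go duration string (92.705194912 against "1m32.705194912s"). A small Python sketch, not from the log, that converts the Go form back to seconds for cross-checking; the helper name is made up:

    # go_duration.py: convert Go-style durations like "1m32.705194912s" to
    # float seconds, matching the podStartSLOduration value beside them.
    import re

    GO_DUR = re.compile(r'^(?:(\d+)h)?(?:(\d+)m)?(?:([\d.]+)s)?$')

    def go_duration_seconds(text: str) -> float:
        hours, minutes, seconds = GO_DUR.match(text).groups()
        return 3600.0 * float(hours or 0) + 60.0 * float(minutes or 0) + float(seconds or 0)

    # Values taken from the cluster-version-operator record above.
    assert abs(go_duration_seconds("1m32.705194912s") - 92.705194912) < 1e-9
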
pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:40 crc kubenswrapper[5120]: I0122 11:49:40.676925 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:40 crc kubenswrapper[5120]: I0122 11:49:40.712302 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:49:41 crc kubenswrapper[5120]: I0122 11:49:41.571768 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:41 crc kubenswrapper[5120]: E0122 11:49:41.571994 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:41 crc kubenswrapper[5120]: I0122 11:49:41.572173 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:41 crc kubenswrapper[5120]: E0122 11:49:41.572287 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:41 crc kubenswrapper[5120]: I0122 11:49:41.572358 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:41 crc kubenswrapper[5120]: I0122 11:49:41.572402 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:41 crc kubenswrapper[5120]: E0122 11:49:41.572464 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:41 crc kubenswrapper[5120]: E0122 11:49:41.572564 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:42 crc kubenswrapper[5120]: I0122 11:49:42.196712 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ldwx4"] Jan 22 11:49:42 crc kubenswrapper[5120]: I0122 11:49:42.197751 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:42 crc kubenswrapper[5120]: E0122 11:49:42.197945 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:42 crc kubenswrapper[5120]: I0122 11:49:42.571923 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.571993 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.572020 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.571993 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.572117 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:43 crc kubenswrapper[5120]: E0122 11:49:43.572104 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:43 crc kubenswrapper[5120]: E0122 11:49:43.572186 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:43 crc kubenswrapper[5120]: E0122 11:49:43.572250 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:43 crc kubenswrapper[5120]: E0122 11:49:43.572411 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.690903 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.692852 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b"} Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.693279 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:49:43 crc kubenswrapper[5120]: I0122 11:49:43.714541 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=42.714520027 podStartE2EDuration="42.714520027s" podCreationTimestamp="2026-01-22 11:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:49:43.713247925 +0000 UTC m=+118.457196326" watchObservedRunningTime="2026-01-22 11:49:43.714520027 +0000 UTC m=+118.458468378" Jan 22 11:49:45 crc kubenswrapper[5120]: E0122 11:49:45.537848 5120 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Jan 22 11:49:45 crc kubenswrapper[5120]: I0122 11:49:45.572828 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:45 crc kubenswrapper[5120]: I0122 11:49:45.572948 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:45 crc kubenswrapper[5120]: I0122 11:49:45.573018 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:45 crc kubenswrapper[5120]: E0122 11:49:45.573059 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:45 crc kubenswrapper[5120]: E0122 11:49:45.573099 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:45 crc kubenswrapper[5120]: I0122 11:49:45.573163 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:45 crc kubenswrapper[5120]: E0122 11:49:45.573218 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:45 crc kubenswrapper[5120]: E0122 11:49:45.573403 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:45 crc kubenswrapper[5120]: E0122 11:49:45.634885 5120 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 11:49:47 crc kubenswrapper[5120]: I0122 11:49:47.572311 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:47 crc kubenswrapper[5120]: I0122 11:49:47.572358 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:47 crc kubenswrapper[5120]: I0122 11:49:47.572659 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:47 crc kubenswrapper[5120]: E0122 11:49:47.572656 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:47 crc kubenswrapper[5120]: E0122 11:49:47.572752 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:47 crc kubenswrapper[5120]: E0122 11:49:47.572824 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:47 crc kubenswrapper[5120]: I0122 11:49:47.572849 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:47 crc kubenswrapper[5120]: E0122 11:49:47.572892 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:49 crc kubenswrapper[5120]: I0122 11:49:49.571667 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:49:49 crc kubenswrapper[5120]: E0122 11:49:49.571813 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ldwx4" podUID="dababdca-8afb-452f-865f-54de3aec21d9" Jan 22 11:49:49 crc kubenswrapper[5120]: I0122 11:49:49.571664 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:49:49 crc kubenswrapper[5120]: I0122 11:49:49.571867 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:49:49 crc kubenswrapper[5120]: E0122 11:49:49.572005 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 22 11:49:49 crc kubenswrapper[5120]: E0122 11:49:49.572061 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 22 11:49:49 crc kubenswrapper[5120]: I0122 11:49:49.572688 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:49:49 crc kubenswrapper[5120]: E0122 11:49:49.572847 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.571511 5120 util.go:30] "No sandbox for pod can be found. 
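
From 11:49:35 to 11:49:51 the same four pods cycle through "No sandbox for pod can be found" and "Error syncing pod, skipping" every two seconds, because sandbox creation is blocked on the missing CNI configuration. A Python sketch, not from the log, that tallies those error records per pod so the blocked workloads stand out; it assumes one record per line as above:

    # blocked_pods.py: count "Error syncing pod, skipping" records by pod,
    # e.g.  python3 blocked_pods.py < kubelet.log
    import re
    import sys
    from collections import Counter

    ERR = re.compile(r'"Error syncing pod, skipping" .* pod="([^"]+)" podUID="([^"]+)"')

    tally = Counter(m.groups() for line in sys.stdin if (m := ERR.search(line)))
    for (pod, uid), n in tally.most_common():
        print(f"{n:3d}  {pod}  {uid}")
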
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.571677 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4"
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.571727 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.572309 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.575561 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.575830 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.576259 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.576743 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.577115 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 22 11:49:51 crc kubenswrapper[5120]: I0122 11:49:51.577530 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 22 11:49:54 crc kubenswrapper[5120]: I0122 11:49:54.707901 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.735094 5120 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.770460 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.787205 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xmvfk"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.787389 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.791087 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.791106 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.791551 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.791633 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.793810 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-x2rhp"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.794046 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.796805 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.796973 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.799424 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.799620 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.802100 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.802297 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.804492 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.806109 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.806368 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.806492 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.806820 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.813937 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.814411 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.815319 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.824435 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-6q5kp"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.824998 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.827406 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-rkbh2"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.828032 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-6q5kp"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.828869 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.829231 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.829460 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.829812 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.830176 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.830381 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.834529 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.852940 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.853171 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.853694 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.853835 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.853874 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.853924 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854003 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854022 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854203 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854320 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854339 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-btnnz"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854426 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854502 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854540 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854514 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854750 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.854823 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.855114 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.855224 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.855461 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857122 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857188 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857329 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857419 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857419 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857480 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857568 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857710 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857806 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857894 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857378 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858080 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.857813 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858042 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858293 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858336 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858407 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858244 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858636 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.858501 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.859010 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.859054 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-p98m2"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.859154 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-btnnz"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.859188 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2"
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.860076 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.862436 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.862564 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.862621 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.862738 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.863990 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.864833 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.870096 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.872065 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.880881 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk"]
Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.885980 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qnw8\" (UniqueName: \"kubernetes.io/projected/dfeef834-363c-4dff-a170-acd203607c65-kube-api-access-8qnw8\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp"
\"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886025 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-image-import-ca\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886045 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba096274-efe0-462b-9a53-89e321166944-config\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886062 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886087 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd113660-b734-4d86-be8d-b28c5e9a328f-audit-dir\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886127 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-audit-policies\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886158 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e36a1cae-0915-45b1-abf9-2f44c78f3306-serving-cert\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886179 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e36a1cae-0915-45b1-abf9-2f44c78f3306-tmp\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886201 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-proxy-ca-bundles\") pod 
\"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886224 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfeef834-363c-4dff-a170-acd203607c65-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886250 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ba096274-efe0-462b-9a53-89e321166944-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886279 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886300 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-encryption-config\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886328 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c07a3946-e1f2-458f-bc29-15741de2605c-audit-dir\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886346 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886362 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae478ef7-56ef-496c-b99c-4d952d5617b0-trusted-ca\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886379 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-serving-cert\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: 
\"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886395 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-tmp\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886432 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-client-ca\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886519 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886571 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-etcd-client\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886608 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeef834-363c-4dff-a170-acd203607c65-config\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886656 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-auth-proxy-config\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886686 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-config\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886702 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-serving-cert\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886721 
5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae478ef7-56ef-496c-b99c-4d952d5617b0-config\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886743 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba096274-efe0-462b-9a53-89e321166944-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886759 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886807 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886823 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae478ef7-56ef-496c-b99c-4d952d5617b0-serving-cert\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886841 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwltw\" (UniqueName: \"kubernetes.io/projected/ae478ef7-56ef-496c-b99c-4d952d5617b0-kube-api-access-kwltw\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886858 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-etcd-serving-ca\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886874 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67mcj\" (UniqueName: \"kubernetes.io/projected/c07a3946-e1f2-458f-bc29-15741de2605c-kube-api-access-67mcj\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: 
I0122 11:49:56.886889 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-tmp\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886922 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjndr\" (UniqueName: \"kubernetes.io/projected/e36a1cae-0915-45b1-abf9-2f44c78f3306-kube-api-access-wjndr\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.886942 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfx8n\" (UniqueName: \"kubernetes.io/projected/ba096274-efe0-462b-9a53-89e321166944-kube-api-access-dfx8n\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887015 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87vvb\" (UniqueName: \"kubernetes.io/projected/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-kube-api-access-87vvb\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887041 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfeef834-363c-4dff-a170-acd203607c65-images\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887073 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-audit\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887097 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-etcd-client\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887112 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887128 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzcsr\" (UniqueName: \"kubernetes.io/projected/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-kube-api-access-kzcsr\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887151 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f67b\" (UniqueName: \"kubernetes.io/projected/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-kube-api-access-5f67b\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887051 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887169 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-config\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887192 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-machine-approver-tls\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887212 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h5dr\" (UniqueName: \"kubernetes.io/projected/eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9-kube-api-access-9h5dr\") pod \"cluster-samples-operator-6b564684c8-7smqb\" (UID: \"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887237 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-serving-cert\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887258 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-config\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887273 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54jwq\" (UniqueName: \"kubernetes.io/projected/fd113660-b734-4d86-be8d-b28c5e9a328f-kube-api-access-54jwq\") pod 
\"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887291 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-config\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887310 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-client-ca\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887325 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fd113660-b734-4d86-be8d-b28c5e9a328f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887347 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-7smqb\" (UID: \"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887362 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-encryption-config\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.887699 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.892590 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-25dsq"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.892804 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.897627 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.903522 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.906278 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.906349 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.906533 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.906709 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.906757 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.906861 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.906913 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.907536 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.908264 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-49gkx"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.909114 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.910863 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.911329 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.911808 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.912682 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.912915 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.913013 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.913063 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.924627 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.924848 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.925451 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.925712 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.925820 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.925914 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.926016 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.926098 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.929397 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.929684 5120 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.935291 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.935792 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.936061 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.937390 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.937493 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-r4999"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.938642 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.938994 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.939095 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.939165 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.939033 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.939342 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.947431 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.947753 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.947976 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.966340 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.967067 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.967263 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.967774 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.969821 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.970838 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.970941 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.975217 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.977082 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.977500 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7"] Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991123 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-auth-proxy-config\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991167 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-config\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991192 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-serving-cert\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991216 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae478ef7-56ef-496c-b99c-4d952d5617b0-config\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991240 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af7812b-a785-44ec-a8eb-eb72b9958b01-serving-cert\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991264 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba096274-efe0-462b-9a53-89e321166944-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991284 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991317 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld95q\" (UniqueName: \"kubernetes.io/projected/ea345128-daaf-464a-b774-8f8cf4c34aa5-kube-api-access-ld95q\") pod 
\"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991340 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991377 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991403 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae478ef7-56ef-496c-b99c-4d952d5617b0-serving-cert\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991423 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kwltw\" (UniqueName: \"kubernetes.io/projected/ae478ef7-56ef-496c-b99c-4d952d5617b0-kube-api-access-kwltw\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991444 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-etcd-serving-ca\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991480 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-67mcj\" (UniqueName: \"kubernetes.io/projected/c07a3946-e1f2-458f-bc29-15741de2605c-kube-api-access-67mcj\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991500 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-tmp\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991525 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-serving-cert\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991547 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-policies\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991568 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/699a5d41-d0b5-4d88-9448-4b3bad2cc424-metrics-tls\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991584 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea345128-daaf-464a-b774-8f8cf4c34aa5-serving-cert\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991603 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991641 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wjndr\" (UniqueName: \"kubernetes.io/projected/e36a1cae-0915-45b1-abf9-2f44c78f3306-kube-api-access-wjndr\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991661 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dfx8n\" (UniqueName: \"kubernetes.io/projected/ba096274-efe0-462b-9a53-89e321166944-kube-api-access-dfx8n\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991684 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/699a5d41-d0b5-4d88-9448-4b3bad2cc424-tmp-dir\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991708 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991743 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-87vvb\" (UniqueName: \"kubernetes.io/projected/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-kube-api-access-87vvb\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991769 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dfeef834-363c-4dff-a170-acd203607c65-images\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991791 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-audit\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991818 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfb4z\" (UniqueName: \"kubernetes.io/projected/a1372d1c-9557-4da9-b571-ea78602f491f-kube-api-access-mfb4z\") pod \"downloads-747b44746d-btnnz\" (UID: \"a1372d1c-9557-4da9-b571-ea78602f491f\") " pod="openshift-console/downloads-747b44746d-btnnz" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991853 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-etcd-client\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991875 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991896 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kzcsr\" (UniqueName: \"kubernetes.io/projected/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-kube-api-access-kzcsr\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991941 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-client\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.991981 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5f67b\" (UniqueName: 
\"kubernetes.io/projected/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-kube-api-access-5f67b\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992002 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-config\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992019 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26kbp\" (UniqueName: \"kubernetes.io/projected/9af7812b-a785-44ec-a8eb-eb72b9958b01-kube-api-access-26kbp\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992040 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-machine-approver-tls\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992065 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9h5dr\" (UniqueName: \"kubernetes.io/projected/eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9-kube-api-access-9h5dr\") pod \"cluster-samples-operator-6b564684c8-7smqb\" (UID: \"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992086 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-serving-cert\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992110 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-config\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992130 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-54jwq\" (UniqueName: \"kubernetes.io/projected/fd113660-b734-4d86-be8d-b28c5e9a328f-kube-api-access-54jwq\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992152 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42d89f76-66b8-4ffa-a63e-13582811b819-serving-cert\") pod 
\"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992178 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992198 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992219 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgrjt\" (UniqueName: \"kubernetes.io/projected/bebd6777-9b90-4b62-a3a9-360290cb39a9-kube-api-access-dgrjt\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992222 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae478ef7-56ef-496c-b99c-4d952d5617b0-config\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992238 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-config\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992258 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-client-ca\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992280 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992301 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fd113660-b734-4d86-be8d-b28c5e9a328f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: 
\"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992327 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-7smqb\" (UID: \"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992345 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-encryption-config\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992365 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-service-ca\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992392 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8qnw8\" (UniqueName: \"kubernetes.io/projected/dfeef834-363c-4dff-a170-acd203607c65-kube-api-access-8qnw8\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992416 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-image-import-ca\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992437 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba096274-efe0-462b-9a53-89e321166944-config\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992467 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992490 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992513 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd113660-b734-4d86-be8d-b28c5e9a328f-audit-dir\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992535 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-audit-policies\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992559 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-tmp-dir\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992581 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e36a1cae-0915-45b1-abf9-2f44c78f3306-serving-cert\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992602 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e36a1cae-0915-45b1-abf9-2f44c78f3306-tmp\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992622 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992643 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfeef834-363c-4dff-a170-acd203607c65-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992667 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ba096274-efe0-462b-9a53-89e321166944-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992748 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kw26\" (UniqueName: \"kubernetes.io/projected/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-kube-api-access-2kw26\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992750 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-auth-proxy-config\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992771 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d89f76-66b8-4ffa-a63e-13582811b819-config\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992795 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992844 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992875 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-config\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992909 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992929 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-encryption-config\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.992950 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/c07a3946-e1f2-458f-bc29-15741de2605c-audit-dir\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994011 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994038 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae478ef7-56ef-496c-b99c-4d952d5617b0-trusted-ca\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994062 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994102 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-serving-cert\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994130 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-tmp\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994153 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ea345128-daaf-464a-b774-8f8cf4c34aa5-available-featuregates\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994176 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994211 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994472 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-client-ca\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994500 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-ca\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994523 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rdkp\" (UniqueName: \"kubernetes.io/projected/699a5d41-d0b5-4d88-9448-4b3bad2cc424-kube-api-access-5rdkp\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994832 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-dir\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994872 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994901 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-etcd-client\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994923 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-config\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.994947 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g2hf\" (UniqueName: \"kubernetes.io/projected/42d89f76-66b8-4ffa-a63e-13582811b819-kube-api-access-9g2hf\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.995041 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.995076 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeef834-363c-4dff-a170-acd203607c65-config\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.995773 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeef834-363c-4dff-a170-acd203607c65-config\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.996676 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-config\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.997684 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-config\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.998133 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fd113660-b734-4d86-be8d-b28c5e9a328f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:56 crc kubenswrapper[5120]: I0122 11:49:56.998585 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-trusted-ca-bundle\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:56.999870 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.001258 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-client-ca\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.001310 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-config\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.001823 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-audit\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.002282 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-config\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.003037 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba096274-efe0-462b-9a53-89e321166944-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.003433 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-machine-approver-tls\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.003714 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-tmp\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.004228 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fd113660-b734-4d86-be8d-b28c5e9a328f-audit-dir\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.005128 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-image-import-ca\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.005132 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/dfeef834-363c-4dff-a170-acd203607c65-images\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.006198 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba096274-efe0-462b-9a53-89e321166944-config\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.006756 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-etcd-client\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.008577 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-encryption-config\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.009090 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e36a1cae-0915-45b1-abf9-2f44c78f3306-serving-cert\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.011629 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.012107 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.012586 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-etcd-serving-ca\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.013383 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-tmp\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:57 crc 
kubenswrapper[5120]: I0122 11:49:57.013666 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c07a3946-e1f2-458f-bc29-15741de2605c-audit-dir\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.014019 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-client-ca\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.014138 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e36a1cae-0915-45b1-abf9-2f44c78f3306-tmp\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.014407 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ba096274-efe0-462b-9a53-89e321166944-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.015137 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-serving-cert\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.015238 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae478ef7-56ef-496c-b99c-4d952d5617b0-serving-cert\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.015515 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fd113660-b734-4d86-be8d-b28c5e9a328f-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.015539 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.015692 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd113660-b734-4d86-be8d-b28c5e9a328f-serving-cert\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.015797 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.016042 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.016365 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ae478ef7-56ef-496c-b99c-4d952d5617b0-trusted-ca\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.017623 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.018482 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.026045 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfeef834-363c-4dff-a170-acd203607c65-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.029165 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-7q8jr"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.030076 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.030338 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-etcd-client\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.034516 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.034641 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c07a3946-e1f2-458f-bc29-15741de2605c-audit-policies\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.034731 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.035176 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.036060 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-serving-cert\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.039094 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-7smqb\" (UID: \"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.039620 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-7x2rm"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.039966 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.045391 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c07a3946-e1f2-458f-bc29-15741de2605c-encryption-config\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.045890 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.046035 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.050590 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.050605 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.050825 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.053527 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.054621 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.057428 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.057572 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.061337 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.061409 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.064013 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dpf6p"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.064097 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.066888 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.066949 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.070278 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.070464 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-dp8rm"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.070636 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.075146 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.075281 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.077570 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.077695 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.080194 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.080286 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.082629 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.082756 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.084828 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-llz79"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.084903 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087105 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087134 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087147 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087158 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087170 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-x2rhp"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087181 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xmvfk"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087196 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mddkn"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.087358 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.092278 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8wqc7"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.092435 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096464 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-6q5kp"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096498 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-p98m2"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096513 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096526 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-btnnz"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096535 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-rkbh2"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096547 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096586 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.096655 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lsqq6"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.099538 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.099561 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.099572 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-d4ftw"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.099655 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100023 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-dir\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.099938 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-dir\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100081 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-config\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100099 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9g2hf\" (UniqueName: \"kubernetes.io/projected/42d89f76-66b8-4ffa-a63e-13582811b819-kube-api-access-9g2hf\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100160 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100182 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-tls\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100296 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-trusted-ca\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100374 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af7812b-a785-44ec-a8eb-eb72b9958b01-serving-cert\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100443 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ld95q\" (UniqueName: \"kubernetes.io/projected/ea345128-daaf-464a-b774-8f8cf4c34aa5-kube-api-access-ld95q\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100475 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100556 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-serving-cert\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100583 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-policies\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100614 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/699a5d41-d0b5-4d88-9448-4b3bad2cc424-metrics-tls\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100637 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea345128-daaf-464a-b774-8f8cf4c34aa5-serving-cert\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100673 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100721 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/699a5d41-d0b5-4d88-9448-4b3bad2cc424-tmp-dir\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100747 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100802 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mfb4z\" (UniqueName: \"kubernetes.io/projected/a1372d1c-9557-4da9-b571-ea78602f491f-kube-api-access-mfb4z\") pod \"downloads-747b44746d-btnnz\" (UID: \"a1372d1c-9557-4da9-b571-ea78602f491f\") " pod="openshift-console/downloads-747b44746d-btnnz" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100832 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e16334d5-3fa8-48de-a8e0-af1f9fa51926-installation-pull-secrets\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100868 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-client\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100895 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-26kbp\" (UniqueName: \"kubernetes.io/projected/9af7812b-a785-44ec-a8eb-eb72b9958b01-kube-api-access-26kbp\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100929 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42d89f76-66b8-4ffa-a63e-13582811b819-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100968 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.100992 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101019 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dgrjt\" (UniqueName: \"kubernetes.io/projected/bebd6777-9b90-4b62-a3a9-360290cb39a9-kube-api-access-dgrjt\") pod 
\"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101052 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101087 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-service-ca\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101132 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101158 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-certificates\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101191 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-tmp-dir\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101232 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-bound-sa-token\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101262 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2kw26\" (UniqueName: \"kubernetes.io/projected/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-kube-api-access-2kw26\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101291 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d89f76-66b8-4ffa-a63e-13582811b819-config\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101320 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mg7w\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-kube-api-access-5mg7w\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101346 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101370 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101403 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101436 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-config\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101483 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101485 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-policies\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101571 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e16334d5-3fa8-48de-a8e0-af1f9fa51926-ca-trust-extracted\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " 
pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101641 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ea345128-daaf-464a-b774-8f8cf4c34aa5-available-featuregates\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101666 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101701 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101726 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-ca\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.101745 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5rdkp\" (UniqueName: \"kubernetes.io/projected/699a5d41-d0b5-4d88-9448-4b3bad2cc424-kube-api-access-5rdkp\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.102146 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ea345128-daaf-464a-b774-8f8cf4c34aa5-available-featuregates\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.102774 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.102923 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-25dsq"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.102972 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-r4999"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.102991 5120 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103002 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103014 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103026 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103038 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-llz79"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103048 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103062 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-7q8jr"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103074 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103083 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103091 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103100 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lsqq6"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103110 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103108 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d89f76-66b8-4ffa-a63e-13582811b819-config\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103120 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103175 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-49gkx"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103194 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103206 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-d4ftw"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103215 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg"] Jan 22 
11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103225 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103235 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103246 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-lfqzp"] Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.103382 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:57.603368549 +0000 UTC m=+132.347316890 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.103551 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-config\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.104056 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-tmp-dir\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.104177 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.104319 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.105304 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/699a5d41-d0b5-4d88-9448-4b3bad2cc424-metrics-tls\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.106290 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9af7812b-a785-44ec-a8eb-eb72b9958b01-serving-cert\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.106802 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.107519 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.106813 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/699a5d41-d0b5-4d88-9448-4b3bad2cc424-tmp-dir\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.107675 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea345128-daaf-464a-b774-8f8cf4c34aa5-serving-cert\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.108304 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.108619 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.113394 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42d89f76-66b8-4ffa-a63e-13582811b819-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.113479 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.113490 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.102969 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9af7812b-a785-44ec-a8eb-eb72b9958b01-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.113868 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114014 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8wqc7"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114052 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-dp8rm"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114065 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114076 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dpf6p"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114129 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114140 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114254 5120 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114364 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.114643 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.130506 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.150531 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.176918 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.190406 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202237 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.202396 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:57.702369015 +0000 UTC m=+132.446317376 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202476 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-tls\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202510 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-trusted-ca\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202533 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-tmp-dir\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202572 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202591 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rkxp\" (UniqueName: \"kubernetes.io/projected/a909382a-a9be-43ea-b525-c382d3d7dac9-kube-api-access-8rkxp\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202608 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bdf4dfdb-f473-480e-ae44-570e99cf695f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202624 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-srv-cert\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202641 5120 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtrk4\" (UniqueName: \"kubernetes.io/projected/5e1bcfb8-8fae-4947-a078-c38b69596998-kube-api-access-rtrk4\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202659 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5f50cf9-ffda-418c-a80d-9612ce61d429-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202694 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-certs\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202711 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp9hf\" (UniqueName: \"kubernetes.io/projected/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-kube-api-access-fp9hf\") pod \"ingress-canary-8wqc7\" (UID: \"503a8f02-4faa-4c71-a07b-e5cf7e21fd01\") " pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202728 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-oauth-config\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202744 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/da2b1465-54c1-4a7d-8cb6-755b28e448b8-webhook-certs\") pod \"multus-admission-controller-69db94689b-dp8rm\" (UID: \"da2b1465-54c1-4a7d-8cb6-755b28e448b8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202774 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2380d23f-8320-4c77-9936-215ff48a32c8-config-volume\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202794 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202813 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-serving-cert\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202842 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-mountpoint-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202859 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhpd9\" (UniqueName: \"kubernetes.io/projected/c5f50cf9-ffda-418c-a80d-9612ce61d429-kube-api-access-dhpd9\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202877 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6edfa4a4-fdb6-420f-ba3b-d984c4784817-profile-collector-cert\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202896 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mljlf\" (UniqueName: \"kubernetes.io/projected/2380d23f-8320-4c77-9936-215ff48a32c8-kube-api-access-mljlf\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202924 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5f50cf9-ffda-418c-a80d-9612ce61d429-images\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.202989 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-stats-auth\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203009 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203026 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6edfa4a4-fdb6-420f-ba3b-d984c4784817-tmpfs\") 
pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203042 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-registration-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203070 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-certificates\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203088 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17d1692e-e64c-415e-98c6-fc0e5c799fe0-tmp\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203106 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bdf4dfdb-f473-480e-ae44-570e99cf695f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203120 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203141 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-bound-sa-token\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203159 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl2wm\" (UniqueName: \"kubernetes.io/projected/2667e960-0d1a-4c78-97ea-b1852f27ce17-kube-api-access-kl2wm\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203176 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-apiservice-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: 
\"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203191 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e1bcfb8-8fae-4947-a078-c38b69596998-service-ca-bundle\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203220 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5mg7w\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-kube-api-access-5mg7w\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203236 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-socket-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203278 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203294 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-cert\") pod \"ingress-canary-8wqc7\" (UID: \"503a8f02-4faa-4c71-a07b-e5cf7e21fd01\") " pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203325 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2rv6\" (UniqueName: \"kubernetes.io/projected/f7fc5383-db19-483a-afb9-23d3f8065a64-kube-api-access-n2rv6\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203342 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/061945e1-c5cb-4451-94ff-0fd4a53b4901-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203359 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-9hjpw\" (UID: \"52bf18ab-85c0-49e5-8b9d-9cb67ec54297\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203376 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-config\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203399 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hdgb\" (UniqueName: \"kubernetes.io/projected/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-kube-api-access-2hdgb\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203420 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-default-certificate\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203444 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw44v\" (UniqueName: \"kubernetes.io/projected/3cc31b0e-b225-470f-870b-f89666eae47b-kube-api-access-gw44v\") pod \"control-plane-machine-set-operator-75ffdb6fcd-fhxb8\" (UID: \"3cc31b0e-b225-470f-870b-f89666eae47b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203471 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-config\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203490 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-config\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203506 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203541 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7qm6\" (UniqueName: \"kubernetes.io/projected/da2b1465-54c1-4a7d-8cb6-755b28e448b8-kube-api-access-s7qm6\") pod 
\"multus-admission-controller-69db94689b-dp8rm\" (UID: \"da2b1465-54c1-4a7d-8cb6-755b28e448b8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203561 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-config\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203579 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/48ce43ae-5f5f-4ae6-91bd-98390a12c650-ready\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203599 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-kube-api-access\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203617 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcskr\" (UniqueName: \"kubernetes.io/projected/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-kube-api-access-jcskr\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203698 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203737 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-tmpfs\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203769 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-trusted-ca-bundle\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.203800 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc7kn\" (UniqueName: 
\"kubernetes.io/projected/efec95f9-a526-41f9-bd7c-0d1bd2505eda-kube-api-access-rc7kn\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204421 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f7fc5383-db19-483a-afb9-23d3f8065a64-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204454 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-node-bootstrap-token\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204490 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-webhook-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204538 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204562 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-serving-cert\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204595 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2667e960-0d1a-4c78-97ea-b1852f27ce17-secret-volume\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204619 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-plugins-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204683 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-config\") pod 
\"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204709 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqccv\" (UniqueName: \"kubernetes.io/projected/061945e1-c5cb-4451-94ff-0fd4a53b4901-kube-api-access-pqccv\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204732 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf4dfdb-f473-480e-ae44-570e99cf695f-config\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204758 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fdm8\" (UniqueName: \"kubernetes.io/projected/17d1692e-e64c-415e-98c6-fc0e5c799fe0-kube-api-access-2fdm8\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204801 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f7fc5383-db19-483a-afb9-23d3f8065a64-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204826 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljmv2\" (UniqueName: \"kubernetes.io/projected/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-kube-api-access-ljmv2\") pod \"package-server-manager-77f986bd66-9hjpw\" (UID: \"52bf18ab-85c0-49e5-8b9d-9cb67ec54297\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204830 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-certificates\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204876 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-metrics-certs\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.204904 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x68s\" (UniqueName: 
\"kubernetes.io/projected/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-kube-api-access-9x68s\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205001 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e16334d5-3fa8-48de-a8e0-af1f9fa51926-installation-pull-secrets\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205030 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48ce43ae-5f5f-4ae6-91bd-98390a12c650-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205055 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2380d23f-8320-4c77-9936-215ff48a32c8-tmp-dir\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205082 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b273aff-e733-49a9-a191-88b0380500eb-tmpfs\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205105 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6edfa4a4-fdb6-420f-ba3b-d984c4784817-srv-cert\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205129 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205173 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-oauth-serving-cert\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205222 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2380d23f-8320-4c77-9936-215ff48a32c8-metrics-tls\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " 
pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205254 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3cc31b0e-b225-470f-870b-f89666eae47b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-fhxb8\" (UID: \"3cc31b0e-b225-470f-870b-f89666eae47b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205313 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxcrb\" (UniqueName: \"kubernetes.io/projected/6edfa4a4-fdb6-420f-ba3b-d984c4784817-kube-api-access-hxcrb\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205349 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-cabundle\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205385 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdjp5\" (UniqueName: \"kubernetes.io/projected/48ce43ae-5f5f-4ae6-91bd-98390a12c650-kube-api-access-mdjp5\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205417 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/061945e1-c5cb-4451-94ff-0fd4a53b4901-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205449 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/061945e1-c5cb-4451-94ff-0fd4a53b4901-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205482 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf4dfdb-f473-480e-ae44-570e99cf695f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205518 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6g7g\" (UniqueName: \"kubernetes.io/projected/7b273aff-e733-49a9-a191-88b0380500eb-kube-api-access-k6g7g\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: 
\"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205544 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2cpr\" (UniqueName: \"kubernetes.io/projected/d92ccf27-d679-4304-98b0-a6e74c7ffda2-kube-api-access-c2cpr\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205576 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bszmq\" (UniqueName: \"kubernetes.io/projected/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-kube-api-access-bszmq\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205600 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-service-ca\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205624 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k8gv\" (UniqueName: \"kubernetes.io/projected/d245a73a-a6cb-488c-91aa-8b3020511b47-kube-api-access-5k8gv\") pod \"migrator-866fcbc849-dc6zt\" (UID: \"d245a73a-a6cb-488c-91aa-8b3020511b47\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.205860 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:57.70584815 +0000 UTC m=+132.449796711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205899 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e16334d5-3fa8-48de-a8e0-af1f9fa51926-ca-trust-extracted\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205922 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-key\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205944 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-serving-cert\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.205982 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5f50cf9-ffda-418c-a80d-9612ce61d429-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.207347 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-trusted-ca\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.207588 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.207641 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-csi-data-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.207765 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e16334d5-3fa8-48de-a8e0-af1f9fa51926-ca-trust-extracted\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.208748 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-tls\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.210509 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.212461 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e16334d5-3fa8-48de-a8e0-af1f9fa51926-installation-pull-secrets\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.213185 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-ca\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.231054 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.250046 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.257605 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-serving-cert\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.270911 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.280992 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-client\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.291379 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.294873 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-etcd-service-ca\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.308423 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.308557 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:57.808532096 +0000 UTC m=+132.552480437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.309792 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-stats-auth\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310204 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310334 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6edfa4a4-fdb6-420f-ba3b-d984c4784817-tmpfs\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310445 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-registration-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310577 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17d1692e-e64c-415e-98c6-fc0e5c799fe0-tmp\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 
11:49:57.310680 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bdf4dfdb-f473-480e-ae44-570e99cf695f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310886 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-registration-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310993 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310479 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.311149 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kl2wm\" (UniqueName: \"kubernetes.io/projected/2667e960-0d1a-4c78-97ea-b1852f27ce17-kube-api-access-kl2wm\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.311152 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6edfa4a4-fdb6-420f-ba3b-d984c4784817-tmpfs\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.311363 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17d1692e-e64c-415e-98c6-fc0e5c799fe0-tmp\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.311608 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-apiservice-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.311712 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e1bcfb8-8fae-4947-a078-c38b69596998-service-ca-bundle\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 
11:49:57.312069 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-socket-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312192 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312281 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-cert\") pod \"ingress-canary-8wqc7\" (UID: \"503a8f02-4faa-4c71-a07b-e5cf7e21fd01\") " pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312192 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-socket-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.310612 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312455 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n2rv6\" (UniqueName: \"kubernetes.io/projected/f7fc5383-db19-483a-afb9-23d3f8065a64-kube-api-access-n2rv6\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312532 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/061945e1-c5cb-4451-94ff-0fd4a53b4901-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312602 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-9hjpw\" (UID: \"52bf18ab-85c0-49e5-8b9d-9cb67ec54297\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312669 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-config\") 
pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.312748 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2hdgb\" (UniqueName: \"kubernetes.io/projected/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-kube-api-access-2hdgb\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313016 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-default-certificate\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313092 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gw44v\" (UniqueName: \"kubernetes.io/projected/3cc31b0e-b225-470f-870b-f89666eae47b-kube-api-access-gw44v\") pod \"control-plane-machine-set-operator-75ffdb6fcd-fhxb8\" (UID: \"3cc31b0e-b225-470f-870b-f89666eae47b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313169 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-config\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313270 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-config\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313354 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313445 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s7qm6\" (UniqueName: \"kubernetes.io/projected/da2b1465-54c1-4a7d-8cb6-755b28e448b8-kube-api-access-s7qm6\") pod \"multus-admission-controller-69db94689b-dp8rm\" (UID: \"da2b1465-54c1-4a7d-8cb6-755b28e448b8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313532 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-config\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: 
\"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313601 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/48ce43ae-5f5f-4ae6-91bd-98390a12c650-ready\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313680 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-kube-api-access\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313751 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jcskr\" (UniqueName: \"kubernetes.io/projected/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-kube-api-access-jcskr\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313843 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.313932 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:57.813916068 +0000 UTC m=+132.557864409 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.313989 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-tmpfs\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314013 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-trusted-ca-bundle\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314030 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rc7kn\" (UniqueName: \"kubernetes.io/projected/efec95f9-a526-41f9-bd7c-0d1bd2505eda-kube-api-access-rc7kn\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314087 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f7fc5383-db19-483a-afb9-23d3f8065a64-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314105 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-node-bootstrap-token\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314093 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/48ce43ae-5f5f-4ae6-91bd-98390a12c650-ready\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314153 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-webhook-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314177 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314193 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-serving-cert\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314243 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2667e960-0d1a-4c78-97ea-b1852f27ce17-secret-volume\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314261 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-plugins-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314317 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-config\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314336 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pqccv\" (UniqueName: \"kubernetes.io/projected/061945e1-c5cb-4451-94ff-0fd4a53b4901-kube-api-access-pqccv\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314353 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf4dfdb-f473-480e-ae44-570e99cf695f-config\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314405 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2fdm8\" (UniqueName: \"kubernetes.io/projected/17d1692e-e64c-415e-98c6-fc0e5c799fe0-kube-api-access-2fdm8\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314427 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f7fc5383-db19-483a-afb9-23d3f8065a64-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314467 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ljmv2\" (UniqueName: \"kubernetes.io/projected/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-kube-api-access-ljmv2\") pod \"package-server-manager-77f986bd66-9hjpw\" (UID: \"52bf18ab-85c0-49e5-8b9d-9cb67ec54297\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314502 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-metrics-certs\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314542 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/061945e1-c5cb-4451-94ff-0fd4a53b4901-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314619 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9x68s\" (UniqueName: \"kubernetes.io/projected/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-kube-api-access-9x68s\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314663 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48ce43ae-5f5f-4ae6-91bd-98390a12c650-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314784 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2380d23f-8320-4c77-9936-215ff48a32c8-tmp-dir\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314808 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b273aff-e733-49a9-a191-88b0380500eb-tmpfs\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314830 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6edfa4a4-fdb6-420f-ba3b-d984c4784817-srv-cert\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.314949 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315001 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-oauth-serving-cert\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315021 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2380d23f-8320-4c77-9936-215ff48a32c8-metrics-tls\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315139 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f7fc5383-db19-483a-afb9-23d3f8065a64-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315146 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3cc31b0e-b225-470f-870b-f89666eae47b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-fhxb8\" (UID: \"3cc31b0e-b225-470f-870b-f89666eae47b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315234 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-plugins-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315442 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hxcrb\" (UniqueName: \"kubernetes.io/projected/6edfa4a4-fdb6-420f-ba3b-d984c4784817-kube-api-access-hxcrb\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315541 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-cabundle\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315698 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mdjp5\" (UniqueName: \"kubernetes.io/projected/48ce43ae-5f5f-4ae6-91bd-98390a12c650-kube-api-access-mdjp5\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316063 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/061945e1-c5cb-4451-94ff-0fd4a53b4901-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316347 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/061945e1-c5cb-4451-94ff-0fd4a53b4901-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316466 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf4dfdb-f473-480e-ae44-570e99cf695f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316155 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2380d23f-8320-4c77-9936-215ff48a32c8-tmp-dir\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316181 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-tmpfs\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315564 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48ce43ae-5f5f-4ae6-91bd-98390a12c650-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316593 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdf4dfdb-f473-480e-ae44-570e99cf695f-config\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.315873 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7b273aff-e733-49a9-a191-88b0380500eb-tmpfs\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316806 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k6g7g\" (UniqueName: 
\"kubernetes.io/projected/7b273aff-e733-49a9-a191-88b0380500eb-kube-api-access-k6g7g\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.316901 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c2cpr\" (UniqueName: \"kubernetes.io/projected/d92ccf27-d679-4304-98b0-a6e74c7ffda2-kube-api-access-c2cpr\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.317030 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bszmq\" (UniqueName: \"kubernetes.io/projected/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-kube-api-access-bszmq\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.317139 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-service-ca\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.317247 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5k8gv\" (UniqueName: \"kubernetes.io/projected/d245a73a-a6cb-488c-91aa-8b3020511b47-kube-api-access-5k8gv\") pod \"migrator-866fcbc849-dc6zt\" (UID: \"d245a73a-a6cb-488c-91aa-8b3020511b47\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.317560 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-key\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.317657 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-serving-cert\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.318478 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5f50cf9-ffda-418c-a80d-9612ce61d429-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.317752 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5f50cf9-ffda-418c-a80d-9612ce61d429-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319173 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319279 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-csi-data-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319419 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-tmp-dir\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319449 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-csi-data-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319712 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-tmp-dir\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319529 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319897 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8rkxp\" (UniqueName: \"kubernetes.io/projected/a909382a-a9be-43ea-b525-c382d3d7dac9-kube-api-access-8rkxp\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.319989 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bdf4dfdb-f473-480e-ae44-570e99cf695f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320062 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-srv-cert\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320145 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rtrk4\" (UniqueName: \"kubernetes.io/projected/5e1bcfb8-8fae-4947-a078-c38b69596998-kube-api-access-rtrk4\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320214 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5f50cf9-ffda-418c-a80d-9612ce61d429-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320259 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/bdf4dfdb-f473-480e-ae44-570e99cf695f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320102 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf4dfdb-f473-480e-ae44-570e99cf695f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320314 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/061945e1-c5cb-4451-94ff-0fd4a53b4901-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320550 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-certs\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320583 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fp9hf\" (UniqueName: \"kubernetes.io/projected/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-kube-api-access-fp9hf\") pod \"ingress-canary-8wqc7\" (UID: \"503a8f02-4faa-4c71-a07b-e5cf7e21fd01\") " pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320604 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-oauth-config\") pod \"console-64d44f6ddf-7q8jr\" (UID: 
\"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320623 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/da2b1465-54c1-4a7d-8cb6-755b28e448b8-webhook-certs\") pod \"multus-admission-controller-69db94689b-dp8rm\" (UID: \"da2b1465-54c1-4a7d-8cb6-755b28e448b8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320651 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2380d23f-8320-4c77-9936-215ff48a32c8-config-volume\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320678 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320700 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-serving-cert\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320723 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-mountpoint-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320751 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dhpd9\" (UniqueName: \"kubernetes.io/projected/c5f50cf9-ffda-418c-a80d-9612ce61d429-kube-api-access-dhpd9\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320777 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6edfa4a4-fdb6-420f-ba3b-d984c4784817-profile-collector-cert\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320810 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mljlf\" (UniqueName: \"kubernetes.io/projected/2380d23f-8320-4c77-9936-215ff48a32c8-kube-api-access-mljlf\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320834 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5f50cf9-ffda-418c-a80d-9612ce61d429-images\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.320894 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-mountpoint-dir\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.330878 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.350273 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.351326 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-config\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.383233 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjndr\" (UniqueName: \"kubernetes.io/projected/e36a1cae-0915-45b1-abf9-2f44c78f3306-kube-api-access-wjndr\") pod \"route-controller-manager-776cdc94d6-fzgnb\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.405632 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-54jwq\" (UniqueName: \"kubernetes.io/projected/fd113660-b734-4d86-be8d-b28c5e9a328f-kube-api-access-54jwq\") pod \"apiserver-9ddfb9f55-xmvfk\" (UID: \"fd113660-b734-4d86-be8d-b28c5e9a328f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.421661 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.421896 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:57.921869863 +0000 UTC m=+132.665818204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.422471 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.422939 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.422999 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:57.922884858 +0000 UTC m=+132.666833209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.445322 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzcsr\" (UniqueName: \"kubernetes.io/projected/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-kube-api-access-kzcsr\") pod \"controller-manager-65b6cccf98-xw8v9\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.473848 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfx8n\" (UniqueName: \"kubernetes.io/projected/ba096274-efe0-462b-9a53-89e321166944-kube-api-access-dfx8n\") pod \"openshift-controller-manager-operator-686468bdd5-mngf2\" (UID: \"ba096274-efe0-462b-9a53-89e321166944\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.484016 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f67b\" (UniqueName: \"kubernetes.io/projected/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-kube-api-access-5f67b\") pod \"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.504731 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0b427d7e-8e8a-4486-831a-aa6cc98f1b39-bound-sa-token\") pod 
\"cluster-image-registry-operator-86c45576b9-bg8p2\" (UID: \"0b427d7e-8e8a-4486-831a-aa6cc98f1b39\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.523766 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.524272 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.024244291 +0000 UTC m=+132.768192632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.524571 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-67mcj\" (UniqueName: \"kubernetes.io/projected/c07a3946-e1f2-458f-bc29-15741de2605c-kube-api-access-67mcj\") pod \"apiserver-8596bd845d-tfhpf\" (UID: \"c07a3946-e1f2-458f-bc29-15741de2605c\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.533096 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.535018 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.545038 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h5dr\" (UniqueName: \"kubernetes.io/projected/eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9-kube-api-access-9h5dr\") pod \"cluster-samples-operator-6b564684c8-7smqb\" (UID: \"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.566104 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qnw8\" (UniqueName: \"kubernetes.io/projected/dfeef834-363c-4dff-a170-acd203607c65-kube-api-access-8qnw8\") pod \"machine-api-operator-755bb95488-x2rhp\" (UID: \"dfeef834-363c-4dff-a170-acd203607c65\") " pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.586667 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-87vvb\" (UniqueName: \"kubernetes.io/projected/e2d50ff8-e389-4ca8-8a4f-6987db07ea3b-kube-api-access-87vvb\") pod \"machine-approver-54c688565-ll2j2\" (UID: \"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.611228 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.614177 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwltw\" (UniqueName: \"kubernetes.io/projected/ae478ef7-56ef-496c-b99c-4d952d5617b0-kube-api-access-kwltw\") pod \"console-operator-67c89758df-6q5kp\" (UID: \"ae478ef7-56ef-496c-b99c-4d952d5617b0\") " pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.625789 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.626132 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.126120198 +0000 UTC m=+132.870068539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.640059 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.650675 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.660826 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.671180 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.677919 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-config\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.697428 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.708711 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.712434 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.732265 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.732439 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.232417613 +0000 UTC m=+132.976365954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.732819 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.735161 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.735639 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.235626631 +0000 UTC m=+132.979574972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.735639 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-xmvfk"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.735744 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" Jan 22 11:49:57 crc kubenswrapper[5120]: W0122 11:49:57.744295 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd113660_b734_4d86_be8d_b28c5e9a328f.slice/crio-8929fdcecaae454196a9a31857dbede8e413a5afe9ac0bd3b4f3d7558cd1837b WatchSource:0}: Error finding container 8929fdcecaae454196a9a31857dbede8e413a5afe9ac0bd3b4f3d7558cd1837b: Status 404 returned error can't find the container with id 8929fdcecaae454196a9a31857dbede8e413a5afe9ac0bd3b4f3d7558cd1837b Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.747580 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-serving-cert\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.751609 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.756417 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-config\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.765535 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.772354 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.772475 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.778820 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-oauth-serving-cert\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.785126 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.791390 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.792012 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.798111 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.811158 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.819465 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2"] Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.829262 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-serving-cert\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.830947 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.835536 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-oauth-config\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.836045 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.836682 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.336667588 +0000 UTC m=+133.080615929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: W0122 11:49:57.840058 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode36a1cae_0915_45b1_abf9_2f44c78f3306.slice/crio-2d59b64b6f345357f2908b0217e759f74cb8c56e84767dbef6ac59043f972d83 WatchSource:0}: Error finding container 2d59b64b6f345357f2908b0217e759f74cb8c56e84767dbef6ac59043f972d83: Status 404 returned error can't find the container with id 2d59b64b6f345357f2908b0217e759f74cb8c56e84767dbef6ac59043f972d83 Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.849009 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.851297 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.853596 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-console-config\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.873474 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.878459 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-service-ca\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.899220 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.910556 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efec95f9-a526-41f9-bd7c-0d1bd2505eda-trusted-ca-bundle\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.911979 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.934360 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.937668 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:57 crc kubenswrapper[5120]: E0122 11:49:57.938101 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.438088243 +0000 UTC m=+133.182036584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.951661 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.972642 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.993551 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 22 11:49:57 crc kubenswrapper[5120]: I0122 11:49:57.995119 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-stats-auth\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.013600 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-default-certificate\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.016418 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.033726 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.044087 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.044629 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.544612773 +0000 UTC m=+133.288561114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.054320 5120 request.go:752] "Waited before sending request" delay="1.00797109s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&limit=500&resourceVersion=0" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.055723 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.055776 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"] Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.062700 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e1bcfb8-8fae-4947-a078-c38b69596998-metrics-certs\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.073271 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.073867 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2"] Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.091094 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.099927 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e1bcfb8-8fae-4947-a078-c38b69596998-service-ca-bundle\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.111367 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.112442 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c5f50cf9-ffda-418c-a80d-9612ce61d429-images\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.131090 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf"] Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.131869 5120 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 22 11:49:58 crc kubenswrapper[5120]: W0122 11:49:58.133391 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba096274_efe0_462b_9a53_89e321166944.slice/crio-fc111f594610879311fb90d1c6ebb61327f8c1f99aa7e396c5e98c2939ad025c WatchSource:0}: Error finding container fc111f594610879311fb90d1c6ebb61327f8c1f99aa7e396c5e98c2939ad025c: Status 404 returned error can't find the container with id fc111f594610879311fb90d1c6ebb61327f8c1f99aa7e396c5e98c2939ad025c Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.146264 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.146621 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.646605582 +0000 UTC m=+133.390553933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.151516 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.159673 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-x2rhp"] Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.169164 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5f50cf9-ffda-418c-a80d-9612ce61d429-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:58 crc kubenswrapper[5120]: W0122 11:49:58.169273 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc07a3946_e1f2_458f_bc29_15741de2605c.slice/crio-b2d3108e925e8233ca6cc953c1c6d7791d039bcdac43efcbb43a12d771162c73 WatchSource:0}: Error finding container b2d3108e925e8233ca6cc953c1c6d7791d039bcdac43efcbb43a12d771162c73: Status 404 returned error can't find the container with id b2d3108e925e8233ca6cc953c1c6d7791d039bcdac43efcbb43a12d771162c73 Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.170829 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 
11:49:58.191497 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.194292 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-config\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.210876 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.220042 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-6q5kp"] Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.232271 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: W0122 11:49:58.248012 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae478ef7_56ef_496c_b99c_4d952d5617b0.slice/crio-1adb53aebc07578df57b6401d6164d8a7fb8bc50b6b3052e45f4ec3290b24031 WatchSource:0}: Error finding container 1adb53aebc07578df57b6401d6164d8a7fb8bc50b6b3052e45f4ec3290b24031: Status 404 returned error can't find the container with id 1adb53aebc07578df57b6401d6164d8a7fb8bc50b6b3052e45f4ec3290b24031 Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.248149 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.248415 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.748393287 +0000 UTC m=+133.492341738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.250836 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.251769 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.751745819 +0000 UTC m=+133.495694170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.251850 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.257998 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.274442 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.290638 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.291790 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6edfa4a4-fdb6-420f-ba3b-d984c4784817-srv-cert\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.295926 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6edfa4a4-fdb6-420f-ba3b-d984c4784817-profile-collector-cert\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.300439 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.305234 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2667e960-0d1a-4c78-97ea-b1852f27ce17-secret-volume\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.312307 5120 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.312421 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-apiservice-cert podName:7b273aff-e733-49a9-a191-88b0380500eb nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.812390964 +0000 UTC m=+133.556339305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-apiservice-cert") pod "packageserver-7d4fc7d867-bbphb" (UID: "7b273aff-e733-49a9-a191-88b0380500eb") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.312484 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.312542 5120 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.312573 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-cert podName:503a8f02-4faa-4c71-a07b-e5cf7e21fd01 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.812566139 +0000 UTC m=+133.556514480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-cert") pod "ingress-canary-8wqc7" (UID: "503a8f02-4faa-4c71-a07b-e5cf7e21fd01") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.313091 5120 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.313131 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-package-server-manager-serving-cert podName:52bf18ab-85c0-49e5-8b9d-9cb67ec54297 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.813122133 +0000 UTC m=+133.557070474 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-package-server-manager-serving-cert") pod "package-server-manager-77f986bd66-9hjpw" (UID: "52bf18ab-85c0-49e5-8b9d-9cb67ec54297") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.314425 5120 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.314465 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-webhook-cert podName:7b273aff-e733-49a9-a191-88b0380500eb nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.814455276 +0000 UTC m=+133.558403617 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-webhook-cert") pod "packageserver-7d4fc7d867-bbphb" (UID: "7b273aff-e733-49a9-a191-88b0380500eb") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.314484 5120 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.314508 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-config podName:e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.814503087 +0000 UTC m=+133.558451428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-config") pod "service-ca-operator-5b9c976747-7ghwq" (UID: "e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7") : failed to sync configmap cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.314544 5120 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.314567 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca podName:17d1692e-e64c-415e-98c6-fc0e5c799fe0 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.814561528 +0000 UTC m=+133.558509869 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca") pod "marketplace-operator-547dbd544d-dpf6p" (UID: "17d1692e-e64c-415e-98c6-fc0e5c799fe0") : failed to sync configmap cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.314988 5120 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.315022 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume podName:2667e960-0d1a-4c78-97ea-b1852f27ce17 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.815013619 +0000 UTC m=+133.558961960 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume") pod "collect-profiles-29484705-g489w" (UID: "2667e960-0d1a-4c78-97ea-b1852f27ce17") : failed to sync configmap cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.315800 5120 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.315985 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2380d23f-8320-4c77-9936-215ff48a32c8-metrics-tls podName:2380d23f-8320-4c77-9936-215ff48a32c8 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.815922371 +0000 UTC m=+133.559870882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2380d23f-8320-4c77-9936-215ff48a32c8-metrics-tls") pod "dns-default-d4ftw" (UID: "2380d23f-8320-4c77-9936-215ff48a32c8") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316036 5120 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316073 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-cabundle podName:d92ccf27-d679-4304-98b0-a6e74c7ffda2 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.816063805 +0000 UTC m=+133.560012326 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-cabundle") pod "service-ca-74545575db-llz79" (UID: "d92ccf27-d679-4304-98b0-a6e74c7ffda2") : failed to sync configmap cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316085 5120 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316109 5120 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316126 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cc31b0e-b225-470f-870b-f89666eae47b-control-plane-machine-set-operator-tls podName:3cc31b0e-b225-470f-870b-f89666eae47b nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.816118116 +0000 UTC m=+133.560066457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/3cc31b0e-b225-470f-870b-f89666eae47b-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-75ffdb6fcd-fhxb8" (UID: "3cc31b0e-b225-470f-870b-f89666eae47b") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316143 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7fc5383-db19-483a-afb9-23d3f8065a64-proxy-tls podName:f7fc5383-db19-483a-afb9-23d3f8065a64 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.816134746 +0000 UTC m=+133.560083297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f7fc5383-db19-483a-afb9-23d3f8065a64-proxy-tls") pod "machine-config-controller-f9cdd68f7-kprrg" (UID: "f7fc5383-db19-483a-afb9-23d3f8065a64") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316153 5120 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.316177 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-node-bootstrap-token podName:a909382a-a9be-43ea-b525-c382d3d7dac9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.816171757 +0000 UTC m=+133.560120098 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-node-bootstrap-token") pod "machine-config-server-lfqzp" (UID: "a909382a-a9be-43ea-b525-c382d3d7dac9") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.316201 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb"] Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322240 5120 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322334 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da2b1465-54c1-4a7d-8cb6-755b28e448b8-webhook-certs podName:da2b1465-54c1-4a7d-8cb6-755b28e448b8 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.822310777 +0000 UTC m=+133.566259118 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/da2b1465-54c1-4a7d-8cb6-755b28e448b8-webhook-certs") pod "multus-admission-controller-69db94689b-dp8rm" (UID: "da2b1465-54c1-4a7d-8cb6-755b28e448b8") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322364 5120 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322392 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-certs podName:a909382a-a9be-43ea-b525-c382d3d7dac9 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.822383339 +0000 UTC m=+133.566331880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-certs") pod "machine-config-server-lfqzp" (UID: "a909382a-a9be-43ea-b525-c382d3d7dac9") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322415 5120 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322447 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2380d23f-8320-4c77-9936-215ff48a32c8-config-volume podName:2380d23f-8320-4c77-9936-215ff48a32c8 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.82243831 +0000 UTC m=+133.566386861 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2380d23f-8320-4c77-9936-215ff48a32c8-config-volume") pod "dns-default-d4ftw" (UID: "2380d23f-8320-4c77-9936-215ff48a32c8") : failed to sync configmap cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322554 5120 configmap.go:193] Couldn't get configMap openshift-multus/cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322586 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist podName:48ce43ae-5f5f-4ae6-91bd-98390a12c650 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.822578035 +0000 UTC m=+133.566526716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist") pod "cni-sysctl-allowlist-ds-mddkn" (UID: "48ce43ae-5f5f-4ae6-91bd-98390a12c650") : failed to sync configmap cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322603 5120 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322629 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-key podName:d92ccf27-d679-4304-98b0-a6e74c7ffda2 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.822619806 +0000 UTC m=+133.566568337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-key") pod "service-ca-74545575db-llz79" (UID: "d92ccf27-d679-4304-98b0-a6e74c7ffda2") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322654 5120 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322682 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-serving-cert podName:e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7 nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.822674937 +0000 UTC m=+133.566623468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-serving-cert") pod "service-ca-operator-5b9c976747-7ghwq" (UID: "e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322716 5120 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322748 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics podName:17d1692e-e64c-415e-98c6-fc0e5c799fe0 nodeName:}" failed. 
No retries permitted until 2026-01-22 11:49:58.822739649 +0000 UTC m=+133.566688000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics") pod "marketplace-operator-547dbd544d-dpf6p" (UID: "17d1692e-e64c-415e-98c6-fc0e5c799fe0") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322767 5120 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.322796 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-srv-cert podName:91b3eb8a-7090-484d-ae8f-8bbe990bce4d nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.822789 +0000 UTC m=+133.566737521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-srv-cert") pod "catalog-operator-75ff9f647d-fscmd" (UID: "91b3eb8a-7090-484d-ae8f-8bbe990bce4d") : failed to sync secret cache: timed out waiting for the condition Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.332496 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.351740 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.354088 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.354495 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.854467545 +0000 UTC m=+133.598415886 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.354780 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.355148 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.855135442 +0000 UTC m=+133.599083783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.383322 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.391346 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.410109 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.430767 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.456977 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.457326 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.957301606 +0000 UTC m=+133.701249937 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.457762 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.457977 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.458483 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:58.958465674 +0000 UTC m=+133.702414015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.470542 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.491461 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.511142 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.531340 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.551921 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.561069 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.561306 5120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.061281823 +0000 UTC m=+133.805230164 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.561715 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.562450 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.062433712 +0000 UTC m=+133.806382053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.570731 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.590118 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.611635 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.631245 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.651005 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.663778 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.664813 5120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.16479702 +0000 UTC m=+133.908745361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.671276 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.690985 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.710643 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.734469 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.751321 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.753647 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" event={"ID":"dfeef834-363c-4dff-a170-acd203607c65","Type":"ContainerStarted","Data":"dd8eace28cff86a1b5496de821e5744b107cd43f9a01079db5e4df31ce5d6895"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.753681 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" event={"ID":"dfeef834-363c-4dff-a170-acd203607c65","Type":"ContainerStarted","Data":"d55d4ebbbbf7c389d9c0dd05f0fb2c775150191738bcc3210391f85f462ace3f"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.753691 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" event={"ID":"dfeef834-363c-4dff-a170-acd203607c65","Type":"ContainerStarted","Data":"7e2348274672d48c92c39196e7e9a5af45bc6c0506c6cf5cb0e605cb31232ff2"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.755570 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-6q5kp" event={"ID":"ae478ef7-56ef-496c-b99c-4d952d5617b0","Type":"ContainerStarted","Data":"c473ccb128a241b291a7ddb1089097c227250278ca512ecd15cd4815e9a53b01"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.755692 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-6q5kp" event={"ID":"ae478ef7-56ef-496c-b99c-4d952d5617b0","Type":"ContainerStarted","Data":"1adb53aebc07578df57b6401d6164d8a7fb8bc50b6b3052e45f4ec3290b24031"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.755994 5120 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.757113 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" event={"ID":"ba096274-efe0-462b-9a53-89e321166944","Type":"ContainerStarted","Data":"dba09e04c0563f201b249c43f74da69960d96e49567ac521c4bb56d4526fe03e"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.757170 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" event={"ID":"ba096274-efe0-462b-9a53-89e321166944","Type":"ContainerStarted","Data":"fc111f594610879311fb90d1c6ebb61327f8c1f99aa7e396c5e98c2939ad025c"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.757547 5120 patch_prober.go:28] interesting pod/console-operator-67c89758df-6q5kp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.757608 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-6q5kp" podUID="ae478ef7-56ef-496c-b99c-4d952d5617b0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.758519 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" event={"ID":"007c14e3-9fa4-44aa-8d05-a57c4dc222a1","Type":"ContainerStarted","Data":"0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.758555 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" event={"ID":"007c14e3-9fa4-44aa-8d05-a57c4dc222a1","Type":"ContainerStarted","Data":"b06d71ff154da6cdba043abe6374515e955691a895c872e8885cdaf9984417d0"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.759232 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.762835 5120 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-xw8v9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.762969 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" podUID="007c14e3-9fa4-44aa-8d05-a57c4dc222a1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.763762 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" 
event={"ID":"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9","Type":"ContainerStarted","Data":"ca04f4a8424c009f0b5737addb245fb47c68c1783cf20d8cb4bda69cdfb35adf"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.763902 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" event={"ID":"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9","Type":"ContainerStarted","Data":"149cc2255fd754dd34cb173207f138a4474b1c8f1b9e6893fdd2d69e3a0ba5c1"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.764030 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" event={"ID":"eb2cf2b6-ca7b-4f75-ad62-7bb5e85aeea9","Type":"ContainerStarted","Data":"e629eb6fff86a7ada5fe848ea1e2de6ee63c79dee6b4bccd40b363aa7c4e4435"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.766045 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.766284 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" event={"ID":"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b","Type":"ContainerStarted","Data":"de7628be39dffdcd6efbafc8c4d9386bd98645efcf19aa6bd627b796e8b44088"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.766411 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" event={"ID":"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b","Type":"ContainerStarted","Data":"83bf2b2c087874c8b93a7989b3e650319643ff762a5db8cac16f527553206986"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.766492 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" event={"ID":"e2d50ff8-e389-4ca8-8a4f-6987db07ea3b","Type":"ContainerStarted","Data":"9501ed0b60273f6fdf8c1d12900a468e69546af222ea38f8c171b08ca38279f5"} Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.766358 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.266346269 +0000 UTC m=+134.010294610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.768303 5120 generic.go:358] "Generic (PLEG): container finished" podID="c07a3946-e1f2-458f-bc29-15741de2605c" containerID="82d3cdaaa62f04c2d1c1cbddb8cc1cd9d718790e43e9eaf4f4bd31f2260a467a" exitCode=0 Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.768448 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" event={"ID":"c07a3946-e1f2-458f-bc29-15741de2605c","Type":"ContainerDied","Data":"82d3cdaaa62f04c2d1c1cbddb8cc1cd9d718790e43e9eaf4f4bd31f2260a467a"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.768537 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" event={"ID":"c07a3946-e1f2-458f-bc29-15741de2605c","Type":"ContainerStarted","Data":"b2d3108e925e8233ca6cc953c1c6d7791d039bcdac43efcbb43a12d771162c73"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.770261 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" event={"ID":"e36a1cae-0915-45b1-abf9-2f44c78f3306","Type":"ContainerStarted","Data":"5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.770301 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" event={"ID":"e36a1cae-0915-45b1-abf9-2f44c78f3306","Type":"ContainerStarted","Data":"2d59b64b6f345357f2908b0217e759f74cb8c56e84767dbef6ac59043f972d83"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.770536 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.770989 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.772698 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" event={"ID":"0b427d7e-8e8a-4486-831a-aa6cc98f1b39","Type":"ContainerStarted","Data":"21082b3c6a22b6745a3993ab12e4b693bebdb61f07bc987c940b0fa236b6c615"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.772740 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" event={"ID":"0b427d7e-8e8a-4486-831a-aa6cc98f1b39","Type":"ContainerStarted","Data":"f50383757d994159e1aa2817319aba1bd5941fa2f72330ed06fb0eb17d2d34a0"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.774714 5120 generic.go:358] "Generic (PLEG): container finished" podID="fd113660-b734-4d86-be8d-b28c5e9a328f" containerID="d911ab14e3f566f46e176f13f90317966dca1a8709ed763ed6fe76b67c93e320" exitCode=0 Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.774764 5120 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" event={"ID":"fd113660-b734-4d86-be8d-b28c5e9a328f","Type":"ContainerDied","Data":"d911ab14e3f566f46e176f13f90317966dca1a8709ed763ed6fe76b67c93e320"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.774782 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" event={"ID":"fd113660-b734-4d86-be8d-b28c5e9a328f","Type":"ContainerStarted","Data":"8929fdcecaae454196a9a31857dbede8e413a5afe9ac0bd3b4f3d7558cd1837b"} Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.790386 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.811081 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.834537 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.851795 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.867947 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.868106 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.368080251 +0000 UTC m=+134.112028592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.868939 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2380d23f-8320-4c77-9936-215ff48a32c8-metrics-tls\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872001 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3cc31b0e-b225-470f-870b-f89666eae47b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-fhxb8\" (UID: \"3cc31b0e-b225-470f-870b-f89666eae47b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872106 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-cabundle\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872222 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-key\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872264 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-serving-cert\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872418 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872468 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-srv-cert\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872548 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-certs\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872575 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/da2b1465-54c1-4a7d-8cb6-755b28e448b8-webhook-certs\") pod \"multus-admission-controller-69db94689b-dp8rm\" (UID: \"da2b1465-54c1-4a7d-8cb6-755b28e448b8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872725 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2380d23f-8320-4c77-9936-215ff48a32c8-config-volume\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.872767 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873161 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-apiservice-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873208 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873235 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-cert\") pod \"ingress-canary-8wqc7\" (UID: \"503a8f02-4faa-4c71-a07b-e5cf7e21fd01\") " pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873270 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-9hjpw\" (UID: \"52bf18ab-85c0-49e5-8b9d-9cb67ec54297\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873328 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-config\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873358 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873480 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-node-bootstrap-token\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873526 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-webhook-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873574 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.873686 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f7fc5383-db19-483a-afb9-23d3f8065a64-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.874628 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.882134 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-config\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.884586 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-cabundle\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.885401 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.888057 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-9hjpw\" (UID: \"52bf18ab-85c0-49e5-8b9d-9cb67ec54297\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.889316 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-serving-cert\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.890607 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-apiservice-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.891260 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b273aff-e733-49a9-a191-88b0380500eb-webhook-cert\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.891361 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3cc31b0e-b225-470f-870b-f89666eae47b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-fhxb8\" (UID: \"3cc31b0e-b225-470f-870b-f89666eae47b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.892844 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.894467 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d92ccf27-d679-4304-98b0-a6e74c7ffda2-signing-key\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.895354 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.39533884 +0000 UTC m=+134.139287181 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.896364 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.901611 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f7fc5383-db19-483a-afb9-23d3f8065a64-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.901711 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.903825 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-srv-cert\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.905710 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/da2b1465-54c1-4a7d-8cb6-755b28e448b8-webhook-certs\") pod \"multus-admission-controller-69db94689b-dp8rm\" (UID: \"da2b1465-54c1-4a7d-8cb6-755b28e448b8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.911291 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.917997 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.941631 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.941711 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-cert\") pod \"ingress-canary-8wqc7\" (UID: \"503a8f02-4faa-4c71-a07b-e5cf7e21fd01\") " pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.951255 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.970123 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 22 11:49:58 crc kubenswrapper[5120]: I0122 11:49:58.975978 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:58 crc kubenswrapper[5120]: E0122 11:49:58.976424 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.476406566 +0000 UTC m=+134.220354907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:58.999575 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.011613 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.033172 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.069040 5120 request.go:752] "Waited before sending request" delay="1.968414834s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.078164 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.078586 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.078164 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.078586 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.57857469 +0000 UTC m=+134.322523031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.098798 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g2hf\" (UniqueName: \"kubernetes.io/projected/42d89f76-66b8-4ffa-a63e-13582811b819-kube-api-access-9g2hf\") pod \"openshift-apiserver-operator-846cbfc458-6q7wl\" (UID: \"42d89f76-66b8-4ffa-a63e-13582811b819\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.113560 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld95q\" (UniqueName: \"kubernetes.io/projected/ea345128-daaf-464a-b774-8f8cf4c34aa5-kube-api-access-ld95q\") pod \"openshift-config-operator-5777786469-rkbh2\" (UID: \"ea345128-daaf-464a-b774-8f8cf4c34aa5\") " pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.128326 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rdkp\" (UniqueName: \"kubernetes.io/projected/699a5d41-d0b5-4d88-9448-4b3bad2cc424-kube-api-access-5rdkp\") pod \"dns-operator-799b87ffcd-p98m2\" (UID: \"699a5d41-d0b5-4d88-9448-4b3bad2cc424\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.130021 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.133189 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2380d23f-8320-4c77-9936-215ff48a32c8-config-volume\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.142183 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kw26\" (UniqueName: \"kubernetes.io/projected/f65e3321-2af5-4ab7-8765-36af9f3ecc9e-kube-api-access-2kw26\") pod \"etcd-operator-69b85846b6-r4999\" (UID: \"f65e3321-2af5-4ab7-8765-36af9f3ecc9e\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.145559 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.150388 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.172892 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.179765 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for
volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.180355 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.680338933 +0000 UTC m=+134.424287274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.192202 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.192904 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2380d23f-8320-4c77-9936-215ff48a32c8-metrics-tls\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.213412 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-26kbp\" (UniqueName: \"kubernetes.io/projected/9af7812b-a785-44ec-a8eb-eb72b9958b01-kube-api-access-26kbp\") pod \"authentication-operator-7f5c659b84-v89hk\" (UID: \"9af7812b-a785-44ec-a8eb-eb72b9958b01\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.231859 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgrjt\" (UniqueName: \"kubernetes.io/projected/bebd6777-9b90-4b62-a3a9-360290cb39a9-kube-api-access-dgrjt\") pod \"oauth-openshift-66458b6674-25dsq\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.249808 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfb4z\" (UniqueName: \"kubernetes.io/projected/a1372d1c-9557-4da9-b571-ea78602f491f-kube-api-access-mfb4z\") pod \"downloads-747b44746d-btnnz\" (UID: \"a1372d1c-9557-4da9-b571-ea78602f491f\") " pod="openshift-console/downloads-747b44746d-btnnz" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.250660 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.270750 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.275539 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-node-bootstrap-token\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.282743 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.285520 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.78549927 +0000 UTC m=+134.529447611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.290708 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.309861 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a909382a-a9be-43ea-b525-c382d3d7dac9-certs\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.341287 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-bound-sa-token\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.370343 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.370944 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-btnnz" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.384683 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.386867 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.388037 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.888016243 +0000 UTC m=+134.631964584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.396373 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.397228 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mg7w\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-kube-api-access-5mg7w\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.418466 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bdf4dfdb-f473-480e-ae44-570e99cf695f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-7nx8w\" (UID: \"bdf4dfdb-f473-480e-ae44-570e99cf695f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.437394 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl2wm\" (UniqueName: \"kubernetes.io/projected/2667e960-0d1a-4c78-97ea-b1852f27ce17-kube-api-access-kl2wm\") pod \"collect-profiles-29484705-g489w\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.471610 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hdgb\" (UniqueName: \"kubernetes.io/projected/62b5ce4a-8844-4e22-8bf1-f1f89efa16f9-kube-api-access-2hdgb\") pod \"kube-storage-version-migrator-operator-565b79b866-w9nlv\" (UID: \"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.478256 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.479053 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2rv6\" (UniqueName: \"kubernetes.io/projected/f7fc5383-db19-483a-afb9-23d3f8065a64-kube-api-access-n2rv6\") pod \"machine-config-controller-f9cdd68f7-kprrg\" (UID: \"f7fc5383-db19-483a-afb9-23d3f8065a64\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.479544 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw44v\" (UniqueName: \"kubernetes.io/projected/3cc31b0e-b225-470f-870b-f89666eae47b-kube-api-access-gw44v\") pod \"control-plane-machine-set-operator-75ffdb6fcd-fhxb8\" (UID: \"3cc31b0e-b225-470f-870b-f89666eae47b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.479899 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.480136 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.489447 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.490108 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:49:59.990095663 +0000 UTC m=+134.734044004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.499552 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7qm6\" (UniqueName: \"kubernetes.io/projected/da2b1465-54c1-4a7d-8cb6-755b28e448b8-kube-api-access-s7qm6\") pod \"multus-admission-controller-69db94689b-dp8rm\" (UID: \"da2b1465-54c1-4a7d-8cb6-755b28e448b8\") " pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.535386 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fbaf6c98-c3db-488e-878a-d0b1b9779ea2-kube-api-access\") pod \"kube-apiserver-operator-575994946d-j9r4l\" (UID: \"fbaf6c98-c3db-488e-878a-d0b1b9779ea2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.553991 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcskr\" (UniqueName: \"kubernetes.io/projected/e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7-kube-api-access-jcskr\") pod \"service-ca-operator-5b9c976747-7ghwq\" (UID: \"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.554888 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.559484 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.576398 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqccv\" (UniqueName: \"kubernetes.io/projected/061945e1-c5cb-4451-94ff-0fd4a53b4901-kube-api-access-pqccv\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.582931 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.582931 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a52cc8b-fb68-4b1d-b91d-576f5ff59968-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-8p7x7\" (UID: \"9a52cc8b-fb68-4b1d-b91d-576f5ff59968\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.586826 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc7kn\" (UniqueName: \"kubernetes.io/projected/efec95f9-a526-41f9-bd7c-0d1bd2505eda-kube-api-access-rc7kn\") pod \"console-64d44f6ddf-7q8jr\" (UID: \"efec95f9-a526-41f9-bd7c-0d1bd2505eda\") " pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.590549 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.591412 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.591678 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.091651742 +0000 UTC m=+134.835600223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.592503 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.592832 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.092815841 +0000 UTC m=+134.836764182 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.612812 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljmv2\" (UniqueName: \"kubernetes.io/projected/52bf18ab-85c0-49e5-8b9d-9cb67ec54297-kube-api-access-ljmv2\") pod \"package-server-manager-77f986bd66-9hjpw\" (UID: \"52bf18ab-85c0-49e5-8b9d-9cb67ec54297\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.613070 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.615971 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.621484 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fdm8\" (UniqueName: \"kubernetes.io/projected/17d1692e-e64c-415e-98c6-fc0e5c799fe0-kube-api-access-2fdm8\") pod \"marketplace-operator-547dbd544d-dpf6p\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.624123 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.643108 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x68s\" (UniqueName: \"kubernetes.io/projected/d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2-kube-api-access-9x68s\") pod \"csi-hostpathplugin-lsqq6\" (UID: \"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2\") " pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.655564 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdjp5\" (UniqueName: \"kubernetes.io/projected/48ce43ae-5f5f-4ae6-91bd-98390a12c650-kube-api-access-mdjp5\") pod \"cni-sysctl-allowlist-ds-mddkn\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.668516 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.685692 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxcrb\" (UniqueName: \"kubernetes.io/projected/6edfa4a4-fdb6-420f-ba3b-d984c4784817-kube-api-access-hxcrb\") pod \"olm-operator-5cdf44d969-x78dg\" (UID: \"6edfa4a4-fdb6-420f-ba3b-d984c4784817\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.695382 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/061945e1-c5cb-4451-94ff-0fd4a53b4901-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-nzfjl\" (UID: \"061945e1-c5cb-4451-94ff-0fd4a53b4901\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.701665 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.702071 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.202053548 +0000 UTC m=+134.946001889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.729336 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6g7g\" (UniqueName: \"kubernetes.io/projected/7b273aff-e733-49a9-a191-88b0380500eb-kube-api-access-k6g7g\") pod \"packageserver-7d4fc7d867-bbphb\" (UID: \"7b273aff-e733-49a9-a191-88b0380500eb\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.742698 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2cpr\" (UniqueName: \"kubernetes.io/projected/d92ccf27-d679-4304-98b0-a6e74c7ffda2-kube-api-access-c2cpr\") pod \"service-ca-74545575db-llz79\" (UID: \"d92ccf27-d679-4304-98b0-a6e74c7ffda2\") " pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.770284 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-r4999"] Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.775577 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bszmq\" (UniqueName: \"kubernetes.io/projected/91b3eb8a-7090-484d-ae8f-8bbe990bce4d-kube-api-access-bszmq\") pod \"catalog-operator-75ff9f647d-fscmd\" (UID: \"91b3eb8a-7090-484d-ae8f-8bbe990bce4d\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.782575 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.792671 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k8gv\" (UniqueName: \"kubernetes.io/projected/d245a73a-a6cb-488c-91aa-8b3020511b47-kube-api-access-5k8gv\") pod \"migrator-866fcbc849-dc6zt\" (UID: \"d245a73a-a6cb-488c-91aa-8b3020511b47\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.800202 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.802937 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.803278 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.303264712 +0000 UTC m=+135.047213043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.806192 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rkxp\" (UniqueName: \"kubernetes.io/projected/a909382a-a9be-43ea-b525-c382d3d7dac9-kube-api-access-8rkxp\") pod \"machine-config-server-lfqzp\" (UID: \"a909382a-a9be-43ea-b525-c382d3d7dac9\") " pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.812147 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtrk4\" (UniqueName: \"kubernetes.io/projected/5e1bcfb8-8fae-4947-a078-c38b69596998-kube-api-access-rtrk4\") pod \"router-default-68cf44c8b8-7x2rm\" (UID: \"5e1bcfb8-8fae-4947-a078-c38b69596998\") " pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.814258 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.814750 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.823467 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.832840 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.839846 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp9hf\" (UniqueName: \"kubernetes.io/projected/503a8f02-4faa-4c71-a07b-e5cf7e21fd01-kube-api-access-fp9hf\") pod \"ingress-canary-8wqc7\" (UID: \"503a8f02-4faa-4c71-a07b-e5cf7e21fd01\") " pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.854262 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.855551 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" event={"ID":"c07a3946-e1f2-458f-bc29-15741de2605c","Type":"ContainerStarted","Data":"b58f46ca694e1c09b1d5aa117e6c8335287b9d84fd676c34d6ac18b2a7745319"} Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.857486 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46262: no serving certificate available for the kubelet" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.871326 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.874863 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.875312 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mljlf\" (UniqueName: \"kubernetes.io/projected/2380d23f-8320-4c77-9936-215ff48a32c8-kube-api-access-mljlf\") pod \"dns-default-d4ftw\" (UID: \"2380d23f-8320-4c77-9936-215ff48a32c8\") " pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.879816 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhpd9\" (UniqueName: \"kubernetes.io/projected/c5f50cf9-ffda-418c-a80d-9612ce61d429-kube-api-access-dhpd9\") pod \"machine-config-operator-67c9d58cbb-2czqg\" (UID: \"c5f50cf9-ffda-418c-a80d-9612ce61d429\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.882068 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" event={"ID":"fd113660-b734-4d86-be8d-b28c5e9a328f","Type":"ContainerStarted","Data":"e9d915c14e1cb702ed0ee52af36016ac13bd762c7ead7a4097c2ee644b3c21d3"} Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.898306 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.903513 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-6q5kp" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.904089 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:49:59 crc kubenswrapper[5120]: E0122 11:49:59.904564 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.404544184 +0000 UTC m=+135.148492525 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.933018 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-llz79" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.938548 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.945404 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8wqc7" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.949254 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46264: no serving certificate available for the kubelet" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.977279 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-d4ftw" Jan 22 11:49:59 crc kubenswrapper[5120]: I0122 11:49:59.988473 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lfqzp" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.009272 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.014821 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.514805493 +0000 UTC m=+135.258753834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.050370 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46280: no serving certificate available for the kubelet" Jan 22 11:50:00 crc kubenswrapper[5120]: W0122 11:50:00.055197 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda909382a_a9be_43ea_b525_c382d3d7dac9.slice/crio-3a3f0a94381e44593949ef9298feb4483d28a29c23231b93966513ecd84ff3fc WatchSource:0}: Error finding container 3a3f0a94381e44593949ef9298feb4483d28a29c23231b93966513ecd84ff3fc: Status 404 returned error can't find the container with id 3a3f0a94381e44593949ef9298feb4483d28a29c23231b93966513ecd84ff3fc Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.110388 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.110983 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.610946271 +0000 UTC m=+135.354894612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.139247 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.163510 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46292: no serving certificate available for the kubelet" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.213510 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.214023 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 11:50:00.714007186 +0000 UTC m=+135.457955527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.264238 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46306: no serving certificate available for the kubelet" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.315553 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.315981 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.815940234 +0000 UTC m=+135.559888565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.344206 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.363735 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46318: no serving certificate available for the kubelet" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.401031 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-mngf2" podStartSLOduration=115.401015453 podStartE2EDuration="1m55.401015453s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:00.40085671 +0000 UTC m=+135.144805051" watchObservedRunningTime="2026-01-22 11:50:00.401015453 +0000 UTC m=+135.144963794" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.417995 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.418333 5120 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:00.918320172 +0000 UTC m=+135.662268513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.464591 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46330: no serving certificate available for the kubelet" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.519751 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.520471 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.020454095 +0000 UTC m=+135.764402426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.560143 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46332: no serving certificate available for the kubelet" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.621261 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.622474 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.122457154 +0000 UTC m=+135.866405495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.724715 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.724926 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.224909014 +0000 UTC m=+135.968857345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.725036 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.725330 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.225323705 +0000 UTC m=+135.969272046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.789551 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" podStartSLOduration=115.789533889 podStartE2EDuration="1m55.789533889s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:00.788601126 +0000 UTC m=+135.532549467" watchObservedRunningTime="2026-01-22 11:50:00.789533889 +0000 UTC m=+135.533482230" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.828151 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.828297 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.328276437 +0000 UTC m=+136.072224788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.828374 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.828701 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.328688776 +0000 UTC m=+136.072637117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.894002 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" event={"ID":"48ce43ae-5f5f-4ae6-91bd-98390a12c650","Type":"ContainerStarted","Data":"224c53d4c2e0d2802958ae5a4e8f3773f21300049c7b7357bf9e459ec82f1d55"} Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.895064 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" event={"ID":"5e1bcfb8-8fae-4947-a078-c38b69596998","Type":"ContainerStarted","Data":"18f6f5bb5a596230152d3a29c830aed3a2a10fa9a9599f4fb0775380fc6ab880"} Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.895089 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" event={"ID":"5e1bcfb8-8fae-4947-a078-c38b69596998","Type":"ContainerStarted","Data":"63e21539b78c3caacad2be48bbce7c838a156a1b2407e79bda3f69a577565072"} Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.897225 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" event={"ID":"fd113660-b734-4d86-be8d-b28c5e9a328f","Type":"ContainerStarted","Data":"7b5b6871c35c27b98c915aec1ce5f2c586f492a1a2f065cbc21d34248a426f49"} Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.898977 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lfqzp" event={"ID":"a909382a-a9be-43ea-b525-c382d3d7dac9","Type":"ContainerStarted","Data":"e3aaddb1a50b992e545ec29f73567401c0118360b105c36c258614e980dd595d"} Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.899028 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lfqzp" event={"ID":"a909382a-a9be-43ea-b525-c382d3d7dac9","Type":"ContainerStarted","Data":"3a3f0a94381e44593949ef9298feb4483d28a29c23231b93966513ecd84ff3fc"} Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.900930 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" event={"ID":"f65e3321-2af5-4ab7-8765-36af9f3ecc9e","Type":"ContainerStarted","Data":"d7571b5a6c094e5317490ea9142d0e3f44894b3c88275a7c50d443f18319ed06"} Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.932033 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.932150 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.43212354 +0000 UTC m=+136.176071881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.932777 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:00 crc kubenswrapper[5120]: E0122 11:50:00.933108 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.433099494 +0000 UTC m=+136.177047835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.992107 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-ll2j2" podStartSLOduration=116.992089032 podStartE2EDuration="1m56.992089032s" podCreationTimestamp="2026-01-22 11:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:00.99076074 +0000 UTC m=+135.734709081" watchObservedRunningTime="2026-01-22 11:50:00.992089032 +0000 UTC m=+135.736037363" Jan 22 11:50:00 crc kubenswrapper[5120]: I0122 11:50:00.993615 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" podStartSLOduration=115.993606649 podStartE2EDuration="1m55.993606649s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:00.96057614 +0000 UTC m=+135.704524481" watchObservedRunningTime="2026-01-22 11:50:00.993606649 +0000 UTC m=+135.737554990" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.033467 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.034990 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.534974351 +0000 UTC m=+136.278922692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.139445 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.139932 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.639917241 +0000 UTC m=+136.383865582 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.240798 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.241325 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.741307195 +0000 UTC m=+136.485255536 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.257693 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46344: no serving certificate available for the kubelet" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.341436 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-bg8p2" podStartSLOduration=116.341415469 podStartE2EDuration="1m56.341415469s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.313027972 +0000 UTC m=+136.056976313" watchObservedRunningTime="2026-01-22 11:50:01.341415469 +0000 UTC m=+136.085363810" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.343605 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.344350 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.84433781 +0000 UTC m=+136.588286151 (durationBeforeRetry 500ms). 
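The "???:1] http: TLS handshake error ... no serving certificate available for the kubelet" lines (here and again at 11:50:02 below) are a separate bootstrap symptom: the kubelet serves its own HTTPS endpoint, and until its serving-certificate request is approved and issued it has nothing to present, so incoming connections fail during the handshake. A sketch of how a Go server behaves in that state, assuming the certificate is resolved dynamically through GetCertificate; the atomic.Pointer is an illustrative stand-in for whatever store later supplies the issued certificate:

    package main

    import (
        "crypto/tls"
        "errors"
        "log"
        "net/http"
        "sync/atomic"
    )

    func main() {
        // While this pointer is nil, every client sees a TLS handshake failure,
        // which is the same externally visible behavior as the log lines above.
        var current atomic.Pointer[tls.Certificate]
        srv := &http.Server{
            Addr: ":10250", // illustrative; the kubelet's serving port
            TLSConfig: &tls.Config{
                GetCertificate: func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
                    if c := current.Load(); c != nil {
                        return c, nil
                    }
                    return nil, errors.New("no serving certificate available for the kubelet")
                },
            },
        }
        // Empty cert/key paths are allowed because GetCertificate is set.
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }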
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.434453 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" podStartSLOduration=115.434437041 podStartE2EDuration="1m55.434437041s" podCreationTimestamp="2026-01-22 11:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.432457724 +0000 UTC m=+136.176406065" watchObservedRunningTime="2026-01-22 11:50:01.434437041 +0000 UTC m=+136.178385432" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.444578 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.445306 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:01.945288164 +0000 UTC m=+136.689236505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.546995 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.547480 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.047464567 +0000 UTC m=+136.791412908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.604581 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-6q5kp" podStartSLOduration=116.60456653 podStartE2EDuration="1m56.60456653s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.60375753 +0000 UTC m=+136.347705861" watchObservedRunningTime="2026-01-22 11:50:01.60456653 +0000 UTC m=+136.348514871" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.660780 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.661216 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.161200111 +0000 UTC m=+136.905148442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.710566 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-p98m2"] Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.740356 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-btnnz"] Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.753920 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-dp8rm"] Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.754169 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7smqb" podStartSLOduration=117.754159601 podStartE2EDuration="1m57.754159601s" podCreationTimestamp="2026-01-22 11:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.744419426 +0000 UTC m=+136.488367767" watchObservedRunningTime="2026-01-22 11:50:01.754159601 +0000 UTC m=+136.498107942" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.763972 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.764334 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.264317847 +0000 UTC m=+137.008266188 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.797517 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-x2rhp" podStartSLOduration=116.79750075 podStartE2EDuration="1m56.79750075s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.784342982 +0000 UTC m=+136.528291323" watchObservedRunningTime="2026-01-22 11:50:01.79750075 +0000 UTC m=+136.541449091" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.798041 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-25dsq"] Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.835550 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.843328 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:01 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:01 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:01 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.843374 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.866816 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.867325 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.36729341 +0000 UTC m=+137.111241751 (durationBeforeRetry 500ms). 
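The router's startup-probe output above follows the usual Kubernetes healthz convention: the endpoint runs a set of named checks, prints one [+] or [-] line per check, and returns HTTP 500 until every check passes, which is why the kubelet keeps logging probeResult="failure" along with the start of the response body. A self-contained sketch of that aggregation pattern; check names here are illustrative, and real healthz handlers also keep a stable check order, which a Go map does not guarantee:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    // healthz aggregates named checks into the [+]/[-] report format seen in
    // the probe logs, returning 500 while any check still fails.
    func healthz(checks map[string]func() error) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            failed := false
            body := ""
            for name, check := range checks {
                if err := check(); err != nil {
                    failed = true
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", name)
                }
            }
            if failed {
                w.WriteHeader(http.StatusInternalServerError)
                fmt.Fprint(w, body+"healthz check failed\n")
                return
            }
            fmt.Fprint(w, body+"ok\n")
        }
    }

    func main() {
        http.HandleFunc("/healthz", healthz(map[string]func() error{
            "process-running": func() error { return nil },
            "has-synced":      func() error { return fmt.Errorf("not synced yet") },
        }))
        log.Fatal(http.ListenAndServe(":8080", nil)) // probe with GET /healthz
    }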
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.918244 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" podStartSLOduration=116.918219733 podStartE2EDuration="1m56.918219733s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.917658739 +0000 UTC m=+136.661607100" watchObservedRunningTime="2026-01-22 11:50:01.918219733 +0000 UTC m=+136.662168074" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.941643 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" event={"ID":"bebd6777-9b90-4b62-a3a9-360290cb39a9","Type":"ContainerStarted","Data":"743767c75fc8dbe2e21f07b80773fcf606c65fb144c9e4f33a6d600d11d2e9c8"} Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.955123 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-lfqzp" podStartSLOduration=5.955094155 podStartE2EDuration="5.955094155s" podCreationTimestamp="2026-01-22 11:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.952878281 +0000 UTC m=+136.696826622" watchObservedRunningTime="2026-01-22 11:50:01.955094155 +0000 UTC m=+136.699042496" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.965918 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-btnnz" event={"ID":"a1372d1c-9557-4da9-b571-ea78602f491f","Type":"ContainerStarted","Data":"b4e65ce889ae38895c08c8b3c073e04a82886add3f94b20866369d763a5ff820"} Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.975523 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:01 crc kubenswrapper[5120]: E0122 11:50:01.975927 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.47591302 +0000 UTC m=+137.219861361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.980346 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" event={"ID":"f65e3321-2af5-4ab7-8765-36af9f3ecc9e","Type":"ContainerStarted","Data":"4b8f993793fd8643e52453a201a5cc1abefa2b347e4cfc0025261d8f963f557e"} Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.989586 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podStartSLOduration=116.98956595 podStartE2EDuration="1m56.98956595s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:01.98955321 +0000 UTC m=+136.733501561" watchObservedRunningTime="2026-01-22 11:50:01.98956595 +0000 UTC m=+136.733514291" Jan 22 11:50:01 crc kubenswrapper[5120]: I0122 11:50:01.992235 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" event={"ID":"699a5d41-d0b5-4d88-9448-4b3bad2cc424","Type":"ContainerStarted","Data":"bb52a0103c69a7acc2f01e1cf2c2aa3da57f29f3ee3ea7dad4c6521e26a391f2"} Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.013860 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" event={"ID":"da2b1465-54c1-4a7d-8cb6-755b28e448b8","Type":"ContainerStarted","Data":"c86e2026e3173b8f3a00b7ae25f6d3d62691c631cdb81827a9d224816c8b0cc0"} Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.017890 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" event={"ID":"48ce43ae-5f5f-4ae6-91bd-98390a12c650","Type":"ContainerStarted","Data":"b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f"} Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.017925 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.031480 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.049206 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-r4999" podStartSLOduration=117.049182783 podStartE2EDuration="1m57.049182783s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:02.04158865 +0000 UTC m=+136.785536991" watchObservedRunningTime="2026-01-22 11:50:02.049182783 +0000 UTC m=+136.793131124" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.059742 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl"] Jan 22 11:50:02 
crc kubenswrapper[5120]: I0122 11:50:02.067827 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-rkbh2"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.069730 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.075796 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" podStartSLOduration=6.075770997 podStartE2EDuration="6.075770997s" podCreationTimestamp="2026-01-22 11:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:02.06514954 +0000 UTC m=+136.809097881" watchObservedRunningTime="2026-01-22 11:50:02.075770997 +0000 UTC m=+136.819719338" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.076828 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.077025 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.577005847 +0000 UTC m=+137.320954188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.077548 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.085084 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.585061752 +0000 UTC m=+137.329010093 (durationBeforeRetry 500ms). 
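The "Observed pod startup duration" entries are straightforward arithmetic: podStartSLOduration appears to be watchObservedRunningTime minus podCreationTimestamp, with the image-pull window excluded here because firstStartedPulling and lastFinishedPulling are the zero time (no pull was observed), and the m=+N suffixes are offsets of the kubelet's monotonic clock since process start. Checking the first such entry of this excerpt against that reading:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the apiserver-8596bd845d-tfhpf entry above;
        // the layout matches Go's default time.String() format used in the log.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-01-22 11:48:05 +0000 UTC")
        observed, _ := time.Parse(layout, "2026-01-22 11:50:00.789533889 +0000 UTC")
        fmt.Println(observed.Sub(created)) // 1m55.789533889s, i.e. 115.789533889s
    }

The result matches both the logged podStartSLOduration=115.789533889 and podStartE2EDuration="1m55.789533889s" exactly.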
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.089469 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.139527 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.149080 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv"] Jan 22 11:50:02 crc kubenswrapper[5120]: W0122 11:50:02.156229 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42d89f76_66b8_4ffa_a63e_13582811b819.slice/crio-41a4bdc58f120bfa8b07e9a9fe672196e67770d90944de431f89c99808cd7281 WatchSource:0}: Error finding container 41a4bdc58f120bfa8b07e9a9fe672196e67770d90944de431f89c99808cd7281: Status 404 returned error can't find the container with id 41a4bdc58f120bfa8b07e9a9fe672196e67770d90944de431f89c99808cd7281 Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.176673 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-7q8jr"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.177843 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.178008 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.179546 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.679528548 +0000 UTC m=+137.423476879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.181011 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w"] Jan 22 11:50:02 crc kubenswrapper[5120]: W0122 11:50:02.189325 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefec95f9_a526_41f9_bd7c_0d1bd2505eda.slice/crio-5e27aeaca4b8c9c6f21fbb8d2cb7043b2120f5c129ce2e0ca9f03a7b432feb29 WatchSource:0}: Error finding container 5e27aeaca4b8c9c6f21fbb8d2cb7043b2120f5c129ce2e0ca9f03a7b432feb29: Status 404 returned error can't find the container with id 5e27aeaca4b8c9c6f21fbb8d2cb7043b2120f5c129ce2e0ca9f03a7b432feb29 Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.233573 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.285052 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.285526 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.785509474 +0000 UTC m=+137.529457815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.387639 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.388078 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.888061287 +0000 UTC m=+137.632009628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.424166 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.424215 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.436061 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.469066 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-llz79"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.480583 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.497332 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8wqc7"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.497716 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.498123 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:02.998108251 +0000 UTC m=+137.742056592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.506679 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.570714 5120 ???:1] "http: TLS handshake error from 192.168.126.11:46356: no serving certificate available for the kubelet" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.598629 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.599125 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.099106806 +0000 UTC m=+137.843055137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.606833 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dpf6p"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.612384 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.614627 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.616140 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.639353 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.643596 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.644412 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.650779 5120 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["openshift-dns/dns-default-d4ftw"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.655338 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lsqq6"] Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.704559 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.705053 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.205037761 +0000 UTC m=+137.948986102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.772522 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.773390 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.792930 5120 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-xmvfk container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]log ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]etcd ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/generic-apiserver-start-informers ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/max-in-flight-filter ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 22 11:50:02 crc kubenswrapper[5120]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 22 11:50:02 crc kubenswrapper[5120]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/project.openshift.io-projectcache ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/openshift.io-startinformers ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 22 11:50:02 crc kubenswrapper[5120]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 22 11:50:02 crc kubenswrapper[5120]: livez 
check failed Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.793027 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" podUID="fd113660-b734-4d86-be8d-b28c5e9a328f" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.805931 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.806088 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.306055736 +0000 UTC m=+138.050004077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.807614 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.808499 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.308480004 +0000 UTC m=+138.052428345 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.811277 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.847212 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:02 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:02 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:02 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.847305 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.916808 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.917014 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.416979641 +0000 UTC m=+138.160927992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.917442 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:02 crc kubenswrapper[5120]: E0122 11:50:02.919622 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.419610375 +0000 UTC m=+138.163558716 (durationBeforeRetry 500ms). 
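Recovery for the volume errors is already in motion: the SyncLoop UPDATE for hostpath-provisioner/csi-hostpathplugin-lsqq6 above is followed by its containers starting just below, and once its registrar sidecar runs, the driver should appear on this node's CSINode object. A client-go sketch for confirming that from outside, assuming a reachable kubeconfig at the default path and that the node is named crc, as the log prefix suggests:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // The CSINode object mirrors what the kubelet has registered locally.
        node, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, d := range node.Spec.Drivers {
            // Expect kubevirt.io.hostpath-provisioner once the plugin is up.
            fmt.Println("registered CSI driver:", d.Name)
        }
    }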
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:02 crc kubenswrapper[5120]: I0122 11:50:02.982740 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mddkn"]
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.018628 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.019730 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.519684567 +0000 UTC m=+138.263632908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.035460 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" event={"ID":"c5f50cf9-ffda-418c-a80d-9612ce61d429","Type":"ContainerStarted","Data":"c2bce4c1bf92e03ad37ebd297aba1ae5b8d55a150e333ac2467aacf92a710870"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.043065 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" event={"ID":"2667e960-0d1a-4c78-97ea-b1852f27ce17","Type":"ContainerStarted","Data":"639c5a6f329d80d432312ff72463fef5484bc1f4f6098a9e08e4b8cc0e600243"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.043143 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" event={"ID":"2667e960-0d1a-4c78-97ea-b1852f27ce17","Type":"ContainerStarted","Data":"d4824bab9e53014c1adf60d5f2c167746888e2b25de0388cf1bcad99ffd70500"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.067869 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" event={"ID":"da2b1465-54c1-4a7d-8cb6-755b28e448b8","Type":"ContainerStarted","Data":"befaa8b061afb24db5ded6203043cc4365244227691b27affa20c097bdbf6a0d"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.069673 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" event={"ID":"52bf18ab-85c0-49e5-8b9d-9cb67ec54297","Type":"ContainerStarted","Data":"899a9550356d2757ef7a845a346e5ddf4b8ba184cc94e439cfad04ee675ac0e1"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.083532 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" event={"ID":"9af7812b-a785-44ec-a8eb-eb72b9958b01","Type":"ContainerStarted","Data":"ef0d63016b930a7d2d0bf191f98942efe2f437bf1f68a1c9c908f87a19a250f1"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.083599 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" event={"ID":"9af7812b-a785-44ec-a8eb-eb72b9958b01","Type":"ContainerStarted","Data":"3094d7e9ba17c9ab1583e83a2d32ca60d259e90dbacafccf4452636bb2978057"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.092656 5120 generic.go:358] "Generic (PLEG): container finished" podID="ea345128-daaf-464a-b774-8f8cf4c34aa5" containerID="fbe03ce179d82f4a2ede6b5469bb49d324c7240b14ddaaa5a1926a324d78ddab" exitCode=0
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.092786 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" event={"ID":"ea345128-daaf-464a-b774-8f8cf4c34aa5","Type":"ContainerDied","Data":"fbe03ce179d82f4a2ede6b5469bb49d324c7240b14ddaaa5a1926a324d78ddab"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.092828 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" event={"ID":"ea345128-daaf-464a-b774-8f8cf4c34aa5","Type":"ContainerStarted","Data":"94a95916ba6c20e7e265226f4475b08186c01182f390e7a7c0a101de329d67d3"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.099832 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" event={"ID":"fbaf6c98-c3db-488e-878a-d0b1b9779ea2","Type":"ContainerStarted","Data":"d10a8dfcec5daab3a9a488965088ed110978b748a0ec0c4190c53ef88864734f"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.103102 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" event={"ID":"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9","Type":"ContainerStarted","Data":"a9ea24eee9113231642066a13a4fd99a97b50d921271e0ef48a08228316952a0"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.107812 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-llz79" event={"ID":"d92ccf27-d679-4304-98b0-a6e74c7ffda2","Type":"ContainerStarted","Data":"434e986462f099155375feceb31a8b8f3026fc7d15e0c0cbc06b958683aba5e6"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.109085 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" event={"ID":"7b273aff-e733-49a9-a191-88b0380500eb","Type":"ContainerStarted","Data":"b0232b053903bec4705b39998b2cc0e0f74928cb4c78d1d52b9b5fbd6c76a99d"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.109808 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" event={"ID":"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7","Type":"ContainerStarted","Data":"febcec9f14837798d972abe684ecefc5bf07c847f5fd7c83053c1150ee8cb9b0"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.115470 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" event={"ID":"d245a73a-a6cb-488c-91aa-8b3020511b47","Type":"ContainerStarted","Data":"127cb8b8804c604feb73da7c8989f3e988105a474877d2882d0b0c96d987f1bc"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.122643 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" podStartSLOduration=118.12262333 podStartE2EDuration="1m58.12262333s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:03.085089571 +0000 UTC m=+137.829037922" watchObservedRunningTime="2026-01-22 11:50:03.12262333 +0000 UTC m=+137.866571671"
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.124222 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v89hk" podStartSLOduration=118.124213198 podStartE2EDuration="1m58.124213198s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:03.121582374 +0000 UTC m=+137.865530715" watchObservedRunningTime="2026-01-22 11:50:03.124213198 +0000 UTC m=+137.868161539"
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.125931 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.126351 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.626336709 +0000 UTC m=+138.370285050 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.138847 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8wqc7" event={"ID":"503a8f02-4faa-4c71-a07b-e5cf7e21fd01","Type":"ContainerStarted","Data":"d8528f1b4f5cb882b0d0cbffc9ef67abd5f66a102718b79f4a9a880b70a1c016"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.148173 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" event={"ID":"91b3eb8a-7090-484d-ae8f-8bbe990bce4d","Type":"ContainerStarted","Data":"526e958b4841f4433b7d50fc908effa496406b6b7a32311ca495d3654eb161eb"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.193498 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" event={"ID":"17d1692e-e64c-415e-98c6-fc0e5c799fe0","Type":"ContainerStarted","Data":"5b1a0b828474bfc01c65e742389b89ec9558f81701ba98898857a82e2cc1733f"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.203464 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" event={"ID":"061945e1-c5cb-4451-94ff-0fd4a53b4901","Type":"ContainerStarted","Data":"e4eebd2729568d2d066a7f64ceb7ea7e6dd372828feeab67282c37454a5292ea"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.211181 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" event={"ID":"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2","Type":"ContainerStarted","Data":"ad2b08c045da56dd507b1d8f148e3fbb0995b2db33dc484cbb8c09b24c0839c1"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.228470 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.229924 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.729893976 +0000 UTC m=+138.473842317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.236004 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" event={"ID":"42d89f76-66b8-4ffa-a63e-13582811b819","Type":"ContainerStarted","Data":"6a8e8302aee96bee35bf3c1544338cb73bc120649ab799b150033bc8dcb51d6e"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.236047 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" event={"ID":"42d89f76-66b8-4ffa-a63e-13582811b819","Type":"ContainerStarted","Data":"41a4bdc58f120bfa8b07e9a9fe672196e67770d90944de431f89c99808cd7281"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.238650 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" event={"ID":"3cc31b0e-b225-470f-870b-f89666eae47b","Type":"ContainerStarted","Data":"d8ea079f89246bd1fbb34ab5b932eccfa08a313f2ffab823a3e21c5008b83fdc"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.252870 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-d4ftw" event={"ID":"2380d23f-8320-4c77-9936-215ff48a32c8","Type":"ContainerStarted","Data":"ba91a3a11694780ec39b23d1182734e9b479730e20efef67a891f8fe6bb0c2d8"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.264734 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-6q7wl" podStartSLOduration=118.264719219 podStartE2EDuration="1m58.264719219s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:03.262865444 +0000 UTC m=+138.006813785" watchObservedRunningTime="2026-01-22 11:50:03.264719219 +0000 UTC m=+138.008667550"
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.297640 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-btnnz" event={"ID":"a1372d1c-9557-4da9-b571-ea78602f491f","Type":"ContainerStarted","Data":"16c718d9c5d36b12b8d36fa5390982626b48bb4bc88b5cd99d41f35d17e69f4d"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.303044 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" event={"ID":"6edfa4a4-fdb6-420f-ba3b-d984c4784817","Type":"ContainerStarted","Data":"a442d4f6a16181914e40383f9e4e35d26c50b03d0476836bc68be6228e9550ed"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.304396 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-btnnz"
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.305159 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-btnnz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.305210 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-btnnz" podUID="a1372d1c-9557-4da9-b571-ea78602f491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.314445 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-7q8jr" event={"ID":"efec95f9-a526-41f9-bd7c-0d1bd2505eda","Type":"ContainerStarted","Data":"007bbe8272bdf0401f433e76998ec3268713a17ef751ea13c15be6c502ee1eeb"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.314483 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-7q8jr" event={"ID":"efec95f9-a526-41f9-bd7c-0d1bd2505eda","Type":"ContainerStarted","Data":"5e27aeaca4b8c9c6f21fbb8d2cb7043b2120f5c129ce2e0ca9f03a7b432feb29"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.330898 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.331114 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" event={"ID":"bdf4dfdb-f473-480e-ae44-570e99cf695f","Type":"ContainerStarted","Data":"f68de55ac52c6339e204f32d0748489be6121e2e94c83fecb6bb5d3c34732042"}
Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.332238 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.832221504 +0000 UTC m=+138.576170025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.343093 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" event={"ID":"f7fc5383-db19-483a-afb9-23d3f8065a64","Type":"ContainerStarted","Data":"7673ac3fcebabf4424353dc66f7a11e0069424a23b7d11295e18f80b61d79380"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.346905 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" event={"ID":"9a52cc8b-fb68-4b1d-b91d-576f5ff59968","Type":"ContainerStarted","Data":"28156ee8b7a7afca6d74c5992a810f7e2ffb332e667e844aa1b362e5ce4abd79"}
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.352318 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-btnnz" podStartSLOduration=118.35229856 podStartE2EDuration="1m58.35229856s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:03.350760433 +0000 UTC m=+138.094708774" watchObservedRunningTime="2026-01-22 11:50:03.35229856 +0000 UTC m=+138.096246901"
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.357939 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-tfhpf"
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.388840 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-7q8jr" podStartSLOduration=118.388821854 podStartE2EDuration="1m58.388821854s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:03.387922442 +0000 UTC m=+138.131870803" watchObservedRunningTime="2026-01-22 11:50:03.388821854 +0000 UTC m=+138.132770195"
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.432456 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.432640 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:03.932603053 +0000 UTC m=+138.676551404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.537309 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.540391 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.040372472 +0000 UTC m=+138.784321023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.638527 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.638811 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.138733203 +0000 UTC m=+138.882681554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.741632 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.741974 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.241946713 +0000 UTC m=+138.985895054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.837351 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 11:50:03 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld
Jan 22 11:50:03 crc kubenswrapper[5120]: [+]process-running ok
Jan 22 11:50:03 crc kubenswrapper[5120]: healthz check failed
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.837677 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.847284 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.847488 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.347456247 +0000 UTC m=+139.091404598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.847997 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.848385 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.348371418 +0000 UTC m=+139.092319759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.949208 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.949448 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.449427035 +0000 UTC m=+139.193375366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:03 crc kubenswrapper[5120]: I0122 11:50:03.949548 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:03 crc kubenswrapper[5120]: E0122 11:50:03.949874 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.449866566 +0000 UTC m=+139.193814907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.051562 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.052086 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.55206255 +0000 UTC m=+139.296010891 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.154766 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.155696 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.655679868 +0000 UTC m=+139.399628209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.261527 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.261878 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.761862349 +0000 UTC m=+139.505810690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.363142 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.363602 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.863581281 +0000 UTC m=+139.607529622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.464268 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.464630 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:04.964613547 +0000 UTC m=+139.708561888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.491279 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" event={"ID":"62b5ce4a-8844-4e22-8bf1-f1f89efa16f9","Type":"ContainerStarted","Data":"9941affb6709dc7abfb1f43681a46c60d52f6de70b929a67943cb1370c8fd373"}
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.506632 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-llz79" event={"ID":"d92ccf27-d679-4304-98b0-a6e74c7ffda2","Type":"ContainerStarted","Data":"8a87de682b363087100ea69063a3e51ccf7bc8d3ce129bd7b27c84641e63e998"}
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.531413 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" event={"ID":"7b273aff-e733-49a9-a191-88b0380500eb","Type":"ContainerStarted","Data":"5755f3f8cb76bc4e385f79b124839e866c9facfe42f0fb11fb3a801b908c03b5"}
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.532452 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.547098 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" event={"ID":"e18ea538-05bd-4b11-b4ac-8cb8c0c9aef7","Type":"ContainerStarted","Data":"f07625f5a57807ccf1a7ab33ca4d5e50490e44251fcb68d3675d815050d7d9c3"}
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.548696 5120 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-bbphb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body=
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.548789 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" podUID="7b273aff-e733-49a9-a191-88b0380500eb" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.551387 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9nlv" podStartSLOduration=119.551369857 podStartE2EDuration="1m59.551369857s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.540687339 +0000 UTC m=+139.284635680" watchObservedRunningTime="2026-01-22 11:50:04.551369857 +0000 UTC m=+139.295318198"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.575524 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.576471 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.076148127 +0000 UTC m=+139.820096468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.587163 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-llz79" podStartSLOduration=118.587138213 podStartE2EDuration="1m58.587138213s" podCreationTimestamp="2026-01-22 11:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.58532707 +0000 UTC m=+139.329275411" watchObservedRunningTime="2026-01-22 11:50:04.587138213 +0000 UTC m=+139.331086554"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.622002 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" event={"ID":"699a5d41-d0b5-4d88-9448-4b3bad2cc424","Type":"ContainerStarted","Data":"3abd778c341b350cecb59fcea2a44b380e0f62616a7511299297758fe79feb78"}
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.674380 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" podStartSLOduration=119.674364685 podStartE2EDuration="1m59.674364685s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.645680651 +0000 UTC m=+139.389629002" watchObservedRunningTime="2026-01-22 11:50:04.674364685 +0000 UTC m=+139.418313026"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.680862 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.682264 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.182245396 +0000 UTC m=+139.926193737 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.683796 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.684194 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.184185102 +0000 UTC m=+139.928133443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.739144 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" event={"ID":"d245a73a-a6cb-488c-91aa-8b3020511b47","Type":"ContainerStarted","Data":"7d0e0a090a75fb980644d31721fb0f0e506a56b7e1cb9e461cfc1a2cca3af806"}
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.739226 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" event={"ID":"d245a73a-a6cb-488c-91aa-8b3020511b47","Type":"ContainerStarted","Data":"0a0e4f7efdc416eba67887ef75bdaa29e042ecbaa58c917d79cad78328309cf1"}
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.750106 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8wqc7" event={"ID":"503a8f02-4faa-4c71-a07b-e5cf7e21fd01","Type":"ContainerStarted","Data":"bdc3e6b1384933af8ca269d948a934cc0a4f49e72e9293fc226fb33bee4549ae"}
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.781923 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-7ghwq" podStartSLOduration=118.781893348 podStartE2EDuration="1m58.781893348s" podCreationTimestamp="2026-01-22 11:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.675792939 +0000 UTC m=+139.419741280" watchObservedRunningTime="2026-01-22 11:50:04.781893348 +0000 UTC m=+139.525841689"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.785527 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-dc6zt" podStartSLOduration=119.785517506 podStartE2EDuration="1m59.785517506s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.769067558 +0000 UTC m=+139.513015899" watchObservedRunningTime="2026-01-22 11:50:04.785517506 +0000 UTC m=+139.529465847"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.788105 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.788335 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.288304543 +0000 UTC m=+140.032252884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.788704 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.790634 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.290617519 +0000 UTC m=+140.034565860 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.817341 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" event={"ID":"91b3eb8a-7090-484d-ae8f-8bbe990bce4d","Type":"ContainerStarted","Data":"59a3a84ef4c046e7b0cf5d93ddd7acc2d57ddc93032f97d7d703a20d097a8712"}
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.818932 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.820134 5120 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-fscmd container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.820212 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" podUID="91b3eb8a-7090-484d-ae8f-8bbe990bce4d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.866379 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 11:50:04 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld
Jan 22 11:50:04 crc kubenswrapper[5120]: [+]process-running ok
Jan 22 11:50:04 crc kubenswrapper[5120]: healthz check failed
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.866492 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" event={"ID":"17d1692e-e64c-415e-98c6-fc0e5c799fe0","Type":"ContainerStarted","Data":"c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22"}
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.867515 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.867795 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.869088 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" podStartSLOduration=119.869078168 podStartE2EDuration="1m59.869078168s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.868351731 +0000 UTC m=+139.612300092" watchObservedRunningTime="2026-01-22 11:50:04.869078168 +0000 UTC m=+139.613026509"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.870224 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8wqc7" podStartSLOduration=8.870220166 podStartE2EDuration="8.870220166s" podCreationTimestamp="2026-01-22 11:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.795305252 +0000 UTC m=+139.539253593" watchObservedRunningTime="2026-01-22 11:50:04.870220166 +0000 UTC m=+139.614168507"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.894622 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.895582 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.395553019 +0000 UTC m=+140.139501360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.895762 5120 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-dpf6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.895808 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.908788 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" podStartSLOduration=119.90876859 podStartE2EDuration="1m59.90876859s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:04.90875954 +0000 UTC m=+139.652707881" watchObservedRunningTime="2026-01-22 11:50:04.90876859 +0000 UTC m=+139.652716931"
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.957349 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" event={"ID":"061945e1-c5cb-4451-94ff-0fd4a53b4901","Type":"ContainerStarted","Data":"6f5cc6c8232538b29e5a9b7ed13d3006d4e09c36643a44bd6106b4e7cf50fade"}
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.982096 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" event={"ID":"3cc31b0e-b225-470f-870b-f89666eae47b","Type":"ContainerStarted","Data":"387a1eb56b33e4478745eda33301343f11a70f6ae6cef77a020e24bc1ac16505"}
Jan 22 11:50:04 crc kubenswrapper[5120]: I0122 11:50:04.996432 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:04 crc kubenswrapper[5120]: E0122 11:50:04.998006 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.497993589 +0000 UTC m=+140.241941930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.021505 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-d4ftw" event={"ID":"2380d23f-8320-4c77-9936-215ff48a32c8","Type":"ContainerStarted","Data":"791f59633a93d596d8eb4e587137b7720856db7cf4bfdde86d83a235a7b3ff49"}
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.052108 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" podStartSLOduration=120.052091209 podStartE2EDuration="2m0.052091209s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.007224633 +0000 UTC m=+139.751172974" watchObservedRunningTime="2026-01-22 11:50:05.052091209 +0000 UTC m=+139.796039550"
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.053141 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-fhxb8" podStartSLOduration=120.053136695 podStartE2EDuration="2m0.053136695s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.051847223 +0000 UTC m=+139.795795564" watchObservedRunningTime="2026-01-22 11:50:05.053136695 +0000 UTC m=+139.797085036"
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.071755 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" event={"ID":"bebd6777-9b90-4b62-a3a9-360290cb39a9","Type":"ContainerStarted","Data":"1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69"}
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.073218 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq"
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.085244 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" event={"ID":"6edfa4a4-fdb6-420f-ba3b-d984c4784817","Type":"ContainerStarted","Data":"094eaef27fd0f4e410c581069ab7db755c9d3ea46b5479bbd5ac1c9b695c1271"}
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.085700 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg"
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.093970 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq"
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.097763 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.098099 5120 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-x78dg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.098167 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" podUID="6edfa4a4-fdb6-420f-ba3b-d984c4784817" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.098690 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" event={"ID":"bdf4dfdb-f473-480e-ae44-570e99cf695f","Type":"ContainerStarted","Data":"0c1b1dbf4d25302aef4a3b8ca0cba857337e677baae977c3dc69f79fd0614971"}
Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.100109 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.600084821 +0000 UTC m=+140.344033292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.107284 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" event={"ID":"f7fc5383-db19-483a-afb9-23d3f8065a64","Type":"ContainerStarted","Data":"7342d3c268de1747605fecb029c4815f4dcb52ed39a25ee2a26379bce32b37de"}
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.109572 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" event={"ID":"c5f50cf9-ffda-418c-a80d-9612ce61d429","Type":"ContainerStarted","Data":"b9dd7cadf64ccc3dac388ceebea1e51c1f42f8ece7fecc500e74806421639d00"}
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.112320 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" gracePeriod=30
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.112547 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" event={"ID":"52bf18ab-85c0-49e5-8b9d-9cb67ec54297","Type":"ContainerStarted","Data":"93c07ec969a9a97147849760ea410885f992ff649f728a54c90d117f74984d18"}
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.112592 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw"
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.117091 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-btnnz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.117131 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-btnnz" podUID="a1372d1c-9557-4da9-b571-ea78602f491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.127660 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" podStartSLOduration=120.127640758 podStartE2EDuration="2m0.127640758s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.121444697 +0000 UTC m=+139.865393038" watchObservedRunningTime="2026-01-22 11:50:05.127640758 +0000 UTC m=+139.871589099"
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.206853 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" podStartSLOduration=120.206838035 podStartE2EDuration="2m0.206838035s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.206379324 +0000 UTC m=+139.950327665" watchObservedRunningTime="2026-01-22 11:50:05.206838035 +0000 UTC m=+139.950786376"
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.207779 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx"
Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.208222 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.708208218 +0000 UTC m=+140.452156559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.209671 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" podStartSLOduration=120.209657613 podStartE2EDuration="2m0.209657613s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.153861763 +0000 UTC m=+139.897810094" watchObservedRunningTime="2026-01-22 11:50:05.209657613 +0000 UTC m=+139.953605954"
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.224740 5120 ???:1] "http: TLS handshake error from 192.168.126.11:49972: no serving certificate available for the kubelet"
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.297789 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7nx8w" podStartSLOduration=120.297775296 podStartE2EDuration="2m0.297775296s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.29664653 +0000 UTC m=+140.040594871" watchObservedRunningTime="2026-01-22 11:50:05.297775296 +0000 UTC m=+140.041723637"
Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.311724 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 22
11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.311896 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.811868247 +0000 UTC m=+140.555816588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.312054 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.313744 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.813731773 +0000 UTC m=+140.557680104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.349664 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" podStartSLOduration=120.349646762 podStartE2EDuration="2m0.349646762s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.327241419 +0000 UTC m=+140.071189770" watchObservedRunningTime="2026-01-22 11:50:05.349646762 +0000 UTC m=+140.093595103" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.351683 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" podStartSLOduration=120.351676961 podStartE2EDuration="2m0.351676961s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:05.349336744 +0000 UTC m=+140.093285085" watchObservedRunningTime="2026-01-22 11:50:05.351676961 +0000 UTC m=+140.095625302" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.413739 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.414084 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.414119 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.414164 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.414185 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.415331 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:05.915308021 +0000 UTC m=+140.659256362 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.416204 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.455999 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.456053 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.458691 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.515062 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.515331 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.015319653 +0000 UTC m=+140.759267994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.616493 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.616895 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.116872581 +0000 UTC m=+140.860820912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.712145 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.719725 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.719911 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.720265 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.220250514 +0000 UTC m=+140.964199015 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.728243 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.738791 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.744883 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dababdca-8afb-452f-865f-54de3aec21d9-metrics-certs\") pod \"network-metrics-daemon-ldwx4\" (UID: \"dababdca-8afb-452f-865f-54de3aec21d9\") " pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.821119 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.821477 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.321461534 +0000 UTC m=+141.065409875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.839357 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:05 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:05 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:05 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.839425 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.923851 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:05 crc kubenswrapper[5120]: E0122 11:50:05.924617 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.424604811 +0000 UTC m=+141.168553152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:05 crc kubenswrapper[5120]: I0122 11:50:05.998608 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ldwx4" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.030915 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.031237 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.531212732 +0000 UTC m=+141.275161073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.144790 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.145336 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.645310444 +0000 UTC m=+141.389258835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.198402 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" event={"ID":"9a52cc8b-fb68-4b1d-b91d-576f5ff59968","Type":"ContainerStarted","Data":"5db6d5c2a50a5f39d1371e72b2ed006990a86cb1cdf403f04c06514a998955eb"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.210817 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-2czqg" event={"ID":"c5f50cf9-ffda-418c-a80d-9612ce61d429","Type":"ContainerStarted","Data":"04cb570fb9c1caf8e1173096ac531c588bccaf8d9c558291659f8a8ecc1b5591"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.222705 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" event={"ID":"da2b1465-54c1-4a7d-8cb6-755b28e448b8","Type":"ContainerStarted","Data":"4022e6efe31ee6e9b9d8d1bec5819639bca01690ec2bb41567262317dee3871b"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.257248 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.259827 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-22 11:50:06.759796385 +0000 UTC m=+141.503744726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.281607 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" event={"ID":"52bf18ab-85c0-49e5-8b9d-9cb67ec54297","Type":"ContainerStarted","Data":"ca6b1a8ab55ad362e9597c1e1763f6036b5828d5534e485bd60767c936bf6289"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.291821 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8p7x7" podStartSLOduration=121.29180342 podStartE2EDuration="2m1.29180342s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:06.237478236 +0000 UTC m=+140.981426577" watchObservedRunningTime="2026-01-22 11:50:06.29180342 +0000 UTC m=+141.035751761" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.315504 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" event={"ID":"ea345128-daaf-464a-b774-8f8cf4c34aa5","Type":"ContainerStarted","Data":"ab3b328971f28941ddd22b65e7d5163afcdd956d4b27632a333bba7d1084f7d5"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.315651 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.317368 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" event={"ID":"fbaf6c98-c3db-488e-878a-d0b1b9779ea2","Type":"ContainerStarted","Data":"f3ed1e9d17f07b9ff8ff641e16c8ec8290b3f92da37e8725b5f87b1b6bea3441"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.341330 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" event={"ID":"699a5d41-d0b5-4d88-9448-4b3bad2cc424","Type":"ContainerStarted","Data":"969ed470d9d29c96fa6df5866c8ecf203eef71170171a74d2b9244433f7cc9e8"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.360213 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.361140 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 11:50:06.861120768 +0000 UTC m=+141.605069109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.368610 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-nzfjl" event={"ID":"061945e1-c5cb-4451-94ff-0fd4a53b4901","Type":"ContainerStarted","Data":"cecffe071504d0e8652bb22e3a48afb206a542a02b08701c3ce5860661e9b90a"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.371216 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-dp8rm" podStartSLOduration=121.371185652 podStartE2EDuration="2m1.371185652s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:06.290336444 +0000 UTC m=+141.034284785" watchObservedRunningTime="2026-01-22 11:50:06.371185652 +0000 UTC m=+141.115133993" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.372893 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" podStartSLOduration=121.372885163 podStartE2EDuration="2m1.372885163s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:06.372257118 +0000 UTC m=+141.116205469" watchObservedRunningTime="2026-01-22 11:50:06.372885163 +0000 UTC m=+141.116833504" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.394948 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-d4ftw" event={"ID":"2380d23f-8320-4c77-9936-215ff48a32c8","Type":"ContainerStarted","Data":"d3493a2f20fcbd2363d735c66f1e13a922ac0c1bbf899a57e020458338cc9f0f"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.395028 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-d4ftw" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.406316 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-p98m2" podStartSLOduration=121.406289112 podStartE2EDuration="2m1.406289112s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:06.403378111 +0000 UTC m=+141.147326472" watchObservedRunningTime="2026-01-22 11:50:06.406289112 +0000 UTC m=+141.150237453" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.424042 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-btnnz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 
11:50:06.424118 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-btnnz" podUID="a1372d1c-9557-4da9-b571-ea78602f491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.426240 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-kprrg" event={"ID":"f7fc5383-db19-483a-afb9-23d3f8065a64","Type":"ContainerStarted","Data":"89b310b1c6888b42555a029d549dad764a92a398406d68db44d007c1bac7a1d5"} Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.429779 5120 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-dpf6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.429854 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.434622 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-fscmd" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.435059 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-bbphb" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.442200 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-j9r4l" podStartSLOduration=121.442184391 podStartE2EDuration="2m1.442184391s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:06.441407612 +0000 UTC m=+141.185355983" watchObservedRunningTime="2026-01-22 11:50:06.442184391 +0000 UTC m=+141.186132722" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.464385 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.470301 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:06.970266091 +0000 UTC m=+141.714214432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.517428 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-x78dg" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.572807 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.573254 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.073240543 +0000 UTC m=+141.817188884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.583854 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-d4ftw" podStartSLOduration=10.58382328 podStartE2EDuration="10.58382328s" podCreationTimestamp="2026-01-22 11:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:06.563738434 +0000 UTC m=+141.307686775" watchObservedRunningTime="2026-01-22 11:50:06.58382328 +0000 UTC m=+141.327771621" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.675905 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.676508 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.176482283 +0000 UTC m=+141.920430624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.779371 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.779922 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.279898826 +0000 UTC m=+142.023847167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.855720 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:06 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:06 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:06 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.855822 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.881272 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.881499 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.381481955 +0000 UTC m=+142.125430296 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.891889 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ldwx4"] Jan 22 11:50:06 crc kubenswrapper[5120]: I0122 11:50:06.983101 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:06 crc kubenswrapper[5120]: E0122 11:50:06.984161 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.484140211 +0000 UTC m=+142.228088552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.084577 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.084814 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.584769817 +0000 UTC m=+142.328718158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.085409 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.085929 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.585922034 +0000 UTC m=+142.329870375 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.187731 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.188387 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.688362475 +0000 UTC m=+142.432310816 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.290403 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.290977 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.790935278 +0000 UTC m=+142.534883619 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.391425 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.391664 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.891621105 +0000 UTC m=+142.635569446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.392209 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.392592 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.892573148 +0000 UTC m=+142.636521489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.437050 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.443933 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"7b1e9d102e8db0363bd0252ff2d3d00b8a64dd89f5bd3ea4ae489c7f13d84514"} Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.443988 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"796f22c539f5b6405b5ebc58626c46e7b6b342ace4debb24f86e9df355d739a2"} Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.447880 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-xmvfk" Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.447938 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" event={"ID":"dababdca-8afb-452f-865f-54de3aec21d9","Type":"ContainerStarted","Data":"8ca93c0558816b104d72abd3a4b7d593f0ad30aac045d6bc43a55c7bcea24291"} Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.469691 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"51a967e2e9bf24bc1a6860f69d464a517ec8466b18d4a6637df0d203fec7f26e"} Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.469753 5120 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"3bff3a4f31db2c3e5cbebb768b5ec3a31c9ecb75cdb704dc9641d07c2f7d724b"} Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.493318 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.494018 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:07.994001413 +0000 UTC m=+142.737949754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.505113 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"8a2c2cf1b1643793202feaa8cef3107f80f72418ea90c91c881bcca9bcd54a04"} Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.505180 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"2c84408190bcdbdd8efdacba3a20b75ea91752e65973998a0c50653ee3161892"} Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.505513 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.523031 5120 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-dpf6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.523097 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.603363 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " 
pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.605745 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.105730138 +0000 UTC m=+142.849678479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.705365 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.705807 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.20578388 +0000 UTC m=+142.949732221 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.723540 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2q8d8"] Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.806855 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.807282 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.307264817 +0000 UTC m=+143.051213158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.837060 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:07 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:07 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:07 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.837199 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:07 crc kubenswrapper[5120]: I0122 11:50:07.910223 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:07 crc kubenswrapper[5120]: E0122 11:50:07.910788 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.410763313 +0000 UTC m=+143.154711654 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.011645 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.011997 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.511983843 +0000 UTC m=+143.255932184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.112837 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.113009 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.612984338 +0000 UTC m=+143.356932689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.113092 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.113388 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.613380547 +0000 UTC m=+143.357328888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.214216 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.214632 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.714614909 +0000 UTC m=+143.458563250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.240438 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2q8d8"] Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.240484 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fztfm"] Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.240547 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.245337 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fztfm"] Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.245526 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.245705 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.248444 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.263072 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p26dp"] Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.267379 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.281747 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p26dp"] Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.314569 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tbgcq"] Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315316 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp5qp\" (UniqueName: \"kubernetes.io/projected/089fc2c1-8274-4532-a14a-21194d01a310-kube-api-access-gp5qp\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315384 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gztht\" (UniqueName: \"kubernetes.io/projected/ed489f01-1188-4d6f-9ed4-9618fddf1eab-kube-api-access-gztht\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315402 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-utilities\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315531 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vctvt\" (UniqueName: \"kubernetes.io/projected/4f669e70-10cd-47da-abc9-84be80cb5cfb-kube-api-access-vctvt\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315682 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315858 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-utilities\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315892 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-catalog-content\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.315971 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-utilities\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.316089 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-catalog-content\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.316163 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-catalog-content\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.316614 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.816597317 +0000 UTC m=+143.560545658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.346561 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.347011 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tbgcq"] Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.368357 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-rkbh2" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417127 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417358 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-catalog-content\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417392 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqkzj\" (UniqueName: \"kubernetes.io/projected/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-kube-api-access-zqkzj\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417415 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-catalog-content\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417437 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gp5qp\" (UniqueName: \"kubernetes.io/projected/089fc2c1-8274-4532-a14a-21194d01a310-kube-api-access-gp5qp\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417468 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-utilities\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417496 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gztht\" (UniqueName: \"kubernetes.io/projected/ed489f01-1188-4d6f-9ed4-9618fddf1eab-kube-api-access-gztht\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417510 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-utilities\") pod 
\"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417528 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vctvt\" (UniqueName: \"kubernetes.io/projected/4f669e70-10cd-47da-abc9-84be80cb5cfb-kube-api-access-vctvt\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417550 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-catalog-content\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417597 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-utilities\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417612 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-catalog-content\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.417640 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-utilities\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.418069 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-utilities\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.418144 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:08.918125615 +0000 UTC m=+143.662073956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.418375 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-catalog-content\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.418734 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-catalog-content\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.419484 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-utilities\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.419747 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-utilities\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.419840 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-catalog-content\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.472747 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gztht\" (UniqueName: \"kubernetes.io/projected/ed489f01-1188-4d6f-9ed4-9618fddf1eab-kube-api-access-gztht\") pod \"community-operators-2q8d8\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.472745 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vctvt\" (UniqueName: \"kubernetes.io/projected/4f669e70-10cd-47da-abc9-84be80cb5cfb-kube-api-access-vctvt\") pod \"certified-operators-fztfm\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.488029 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp5qp\" (UniqueName: \"kubernetes.io/projected/089fc2c1-8274-4532-a14a-21194d01a310-kube-api-access-gp5qp\") pod \"certified-operators-p26dp\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " 
pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.518821 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-utilities\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.518886 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-catalog-content\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.518917 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.519000 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zqkzj\" (UniqueName: \"kubernetes.io/projected/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-kube-api-access-zqkzj\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.520492 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-utilities\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.520933 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.020913453 +0000 UTC m=+143.764861784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.528441 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-catalog-content\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.563307 5120 generic.go:358] "Generic (PLEG): container finished" podID="2667e960-0d1a-4c78-97ea-b1852f27ce17" containerID="639c5a6f329d80d432312ff72463fef5484bc1f4f6098a9e08e4b8cc0e600243" exitCode=0 Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.563454 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" event={"ID":"2667e960-0d1a-4c78-97ea-b1852f27ce17","Type":"ContainerDied","Data":"639c5a6f329d80d432312ff72463fef5484bc1f4f6098a9e08e4b8cc0e600243"} Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.570934 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqkzj\" (UniqueName: \"kubernetes.io/projected/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-kube-api-access-zqkzj\") pod \"community-operators-tbgcq\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.576817 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.593766 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" event={"ID":"dababdca-8afb-452f-865f-54de3aec21d9","Type":"ContainerStarted","Data":"5adcb8cefb95d4673c319746a905e8db0486fe17bfcd7800342363e85130ebad"} Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.595566 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.601133 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.622663 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.623547 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.123531308 +0000 UTC m=+143.867479649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.724467 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.726099 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.22607784 +0000 UTC m=+143.970026181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.736176 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.826376 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.827036 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.327001903 +0000 UTC m=+144.070950244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.851242 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:08 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:08 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:08 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.851510 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:08 crc kubenswrapper[5120]: I0122 11:50:08.931035 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:08 crc kubenswrapper[5120]: E0122 11:50:08.931350 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.431337499 +0000 UTC m=+144.175285840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.032274 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.032643 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.532626631 +0000 UTC m=+144.276574962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.133800 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.134260 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.63423849 +0000 UTC m=+144.378186831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.240918 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.241058 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.741033266 +0000 UTC m=+144.484981607 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.241453 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.241851 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.741839455 +0000 UTC m=+144.485787796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.351445 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.352025 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.851999443 +0000 UTC m=+144.595947784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.372776 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-btnnz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.373232 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-btnnz" podUID="a1372d1c-9557-4da9-b571-ea78602f491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.454475 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.455437 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:09.955405026 +0000 UTC m=+144.699353367 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.556732 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.557166 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.057149848 +0000 UTC m=+144.801098189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.631881 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ldwx4" event={"ID":"dababdca-8afb-452f-865f-54de3aec21d9","Type":"ContainerStarted","Data":"d22a429ee45feef375aced9d7691d9985386b7a0d534582318023895a48f3b59"} Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.670986 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.673527 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.173502686 +0000 UTC m=+144.917451027 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.696769 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-ldwx4" podStartSLOduration=124.696733158 podStartE2EDuration="2m4.696733158s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:09.677262466 +0000 UTC m=+144.421210797" watchObservedRunningTime="2026-01-22 11:50:09.696733158 +0000 UTC m=+144.440681499" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.698769 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rp8qf"] Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.712330 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.720419 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.725830 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp8qf"] Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.772934 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.773307 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzfgd\" (UniqueName: \"kubernetes.io/projected/316646c5-1898-417a-8bd7-00eeadfe1243-kube-api-access-kzfgd\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.773408 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-catalog-content\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.773465 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-utilities\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.773611 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.273582208 +0000 UTC m=+145.017530559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.815583 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.815633 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.820819 5120 patch_prober.go:28] interesting pod/console-64d44f6ddf-7q8jr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.820922 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-7q8jr" podUID="efec95f9-a526-41f9-bd7c-0d1bd2505eda" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.834800 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.839127 5120 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-7x2rm container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 11:50:09 crc kubenswrapper[5120]: [-]has-synced failed: reason withheld Jan 22 11:50:09 crc kubenswrapper[5120]: [+]process-running ok Jan 22 11:50:09 crc kubenswrapper[5120]: healthz check failed Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.839205 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" podUID="5e1bcfb8-8fae-4947-a078-c38b69596998" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.846223 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fztfm"] Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.875331 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-utilities\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.875445 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kzfgd\" (UniqueName: \"kubernetes.io/projected/316646c5-1898-417a-8bd7-00eeadfe1243-kube-api-access-kzfgd\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 
11:50:09.875612 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-catalog-content\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.875708 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.877184 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-utilities\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.878294 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.378271823 +0000 UTC m=+145.122220334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.879452 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-catalog-content\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.913023 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tbgcq"] Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.947200 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p26dp"] Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.948862 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzfgd\" (UniqueName: \"kubernetes.io/projected/316646c5-1898-417a-8bd7-00eeadfe1243-kube-api-access-kzfgd\") pod \"redhat-marketplace-rp8qf\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.972949 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2q8d8"] Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.976967 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.977106 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.477073994 +0000 UTC m=+145.221022335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:09 crc kubenswrapper[5120]: I0122 11:50:09.977534 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:09 crc kubenswrapper[5120]: E0122 11:50:09.978043 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.478025077 +0000 UTC m=+145.221973418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.039092 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.081738 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.082152 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.582135627 +0000 UTC m=+145.326083968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.089739 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z5nvn"] Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.102503 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.102842 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5nvn"] Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.152705 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.184706 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-utilities\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.184761 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-catalog-content\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.184807 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ctgj\" (UniqueName: \"kubernetes.io/projected/5a52d1c0-c55c-47b4-936e-a783304a0e89-kube-api-access-2ctgj\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.184841 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.185230 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.685213153 +0000 UTC m=+145.429161494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.286733 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.286946 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume\") pod \"2667e960-0d1a-4c78-97ea-b1852f27ce17\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.287030 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl2wm\" (UniqueName: \"kubernetes.io/projected/2667e960-0d1a-4c78-97ea-b1852f27ce17-kube-api-access-kl2wm\") pod \"2667e960-0d1a-4c78-97ea-b1852f27ce17\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.287181 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2667e960-0d1a-4c78-97ea-b1852f27ce17-secret-volume\") pod \"2667e960-0d1a-4c78-97ea-b1852f27ce17\" (UID: \"2667e960-0d1a-4c78-97ea-b1852f27ce17\") " Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.287393 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-utilities\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.287615 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.787399347 +0000 UTC m=+145.531347688 (durationBeforeRetry 500ms). 
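[annotation] Every MountVolume.MountDevice and UnmountVolume.TearDown failure in this stretch reduces to one condition: the kubevirt.io.hostpath-provisioner CSI plugin has not yet registered with the kubelet, so no CSI client can be constructed for it. The registration the kubelet consults is reflected in the node's CSINode object. A sketch using client-go to list what the node has actually registered; the kubeconfig path and the node name "crc" are assumptions:

    // csicheck.go: list the CSI drivers registered on a node, the list
    // that "driver name ... not found in the list of registered CSI
    // drivers" refers to.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// CSINode carries the per-node list of registered plugins; an
    	// empty Drivers slice here matches the failures above, and the
    	// entry appears once csi-hostpathplugin-lsqq6 finishes starting.
    	node, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, d := range node.Spec.Drivers {
    		fmt.Println("registered:", d.Name)
    	}
    }
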
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.287843 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-catalog-content\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.287907 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-utilities\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.288285 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2ctgj\" (UniqueName: \"kubernetes.io/projected/5a52d1c0-c55c-47b4-936e-a783304a0e89-kube-api-access-2ctgj\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.288608 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.288768 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-catalog-content\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.289121 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.789112848 +0000 UTC m=+145.533061189 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.291220 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume" (OuterVolumeSpecName: "config-volume") pod "2667e960-0d1a-4c78-97ea-b1852f27ce17" (UID: "2667e960-0d1a-4c78-97ea-b1852f27ce17"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.303229 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2667e960-0d1a-4c78-97ea-b1852f27ce17-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2667e960-0d1a-4c78-97ea-b1852f27ce17" (UID: "2667e960-0d1a-4c78-97ea-b1852f27ce17"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.318434 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2667e960-0d1a-4c78-97ea-b1852f27ce17-kube-api-access-kl2wm" (OuterVolumeSpecName: "kube-api-access-kl2wm") pod "2667e960-0d1a-4c78-97ea-b1852f27ce17" (UID: "2667e960-0d1a-4c78-97ea-b1852f27ce17"). InnerVolumeSpecName "kube-api-access-kl2wm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.319528 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ctgj\" (UniqueName: \"kubernetes.io/projected/5a52d1c0-c55c-47b4-936e-a783304a0e89-kube-api-access-2ctgj\") pod \"redhat-marketplace-z5nvn\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.391747 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.392022 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.891951208 +0000 UTC m=+145.635899549 (durationBeforeRetry 500ms). 
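[annotation] The reconciler_common.go records above are the volume manager's reconcile loop at work: it diffs a desired state of world (volumes the scheduled pods need) against the actual state (what is currently attached and mounted), starting MountVolume for missing entries and UnmountVolume for orphaned ones. A deliberately tiny sketch of that diff; the real code keys everything by volume UniqueName and funnels each operation through the backoff gate shown earlier:

    // reconciler.go: minimal desired-vs-actual volume reconcile.
    package main

    import "fmt"

    func reconcile(desired, actual map[string]bool) {
    	for vol := range desired {
    		if !actual[vol] {
    			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", vol)
    		}
    	}
    	for vol := range actual {
    		if !desired[vol] {
    			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
    		}
    	}
    }

    func main() {
    	// A pod that needs two empty-dirs, while a departed pod's PVC is
    	// still mounted: mirrors the mix of Mount/Unmount records above.
    	desired := map[string]bool{"catalog-content": true, "utilities": true}
    	actual := map[string]bool{"utilities": true, "pvc-b21f41aa": true}
    	reconcile(desired, actual)
    }
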
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.392433 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2667e960-0d1a-4c78-97ea-b1852f27ce17-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.392462 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2667e960-0d1a-4c78-97ea-b1852f27ce17-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.392474 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kl2wm\" (UniqueName: \"kubernetes.io/projected/2667e960-0d1a-4c78-97ea-b1852f27ce17-kube-api-access-kl2wm\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.412894 5120 ???:1] "http: TLS handshake error from 192.168.126.11:49984: no serving certificate available for the kubelet" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.419038 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.471341 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.471973 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2667e960-0d1a-4c78-97ea-b1852f27ce17" containerName="collect-profiles" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.471991 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2667e960-0d1a-4c78-97ea-b1852f27ce17" containerName="collect-profiles" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.472091 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="2667e960-0d1a-4c78-97ea-b1852f27ce17" containerName="collect-profiles" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.480978 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.481582 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.485092 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.485420 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.493622 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.494031 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:10.994015969 +0000 UTC m=+145.737964310 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.495909 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp8qf"] Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.596908 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.597184 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ed0804-5c2e-4054-a7af-c90d2103aacb-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.597286 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ed0804-5c2e-4054-a7af-c90d2103aacb-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.597412 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.097395971 +0000 UTC m=+145.841344312 (durationBeforeRetry 500ms). 
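[annotation] The "no serving certificate available for the kubelet" handshake error a few records above is what the kubelet's dynamic serving-certificate logic reports while its serving certificate has not yet been issued or rotated in: the TLS config's GetCertificate callback returns an error, and every incoming handshake fails until the certificate store is populated. A minimal sketch of that mechanism; the type and field names here are hypothetical:

    // servingcert.go: TLS handshakes fail until a cert is loaded.
    package main

    import (
    	"crypto/tls"
    	"errors"
    	"fmt"
    	"sync/atomic"
    )

    type certStore struct{ current atomic.Pointer[tls.Certificate] }

    func (s *certStore) tlsConfig() *tls.Config {
    	return &tls.Config{
    		GetCertificate: func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
    			if c := s.current.Load(); c != nil {
    				return c, nil
    			}
    			// Surfaces to clients as a handshake failure, logged
    			// above as "TLS handshake error from 192.168.126.11...".
    			return nil, errors.New("no serving certificate available")
    		},
    	}
    }

    func main() {
    	var s certStore
    	_, err := s.tlsConfig().GetCertificate(&tls.ClientHelloInfo{})
    	fmt.Println(err) // no serving certificate available
    }
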
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.687297 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" event={"ID":"2667e960-0d1a-4c78-97ea-b1852f27ce17","Type":"ContainerDied","Data":"d4824bab9e53014c1adf60d5f2c167746888e2b25de0388cf1bcad99ffd70500"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.688477 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4824bab9e53014c1adf60d5f2c167746888e2b25de0388cf1bcad99ffd70500" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.688671 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.694641 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp8qf" event={"ID":"316646c5-1898-417a-8bd7-00eeadfe1243","Type":"ContainerStarted","Data":"b88cdc87cf3e9924bb751ee1a18fd60cd70c52d60437b53a435f731721d1f00b"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.701819 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ed0804-5c2e-4054-a7af-c90d2103aacb-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.702063 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.702353 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ed0804-5c2e-4054-a7af-c90d2103aacb-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.702937 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ed0804-5c2e-4054-a7af-c90d2103aacb-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.703246 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.203232984 +0000 UTC m=+145.947181325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.734122 5120 generic.go:358] "Generic (PLEG): container finished" podID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerID="7de27767f0a768c4d8be8f2a9463a108ad7455645c4ac170a6ce680c9ed560d4" exitCode=0 Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.734563 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbgcq" event={"ID":"3e95505c-a7eb-4d9f-be2f-e7129e3643b8","Type":"ContainerDied","Data":"7de27767f0a768c4d8be8f2a9463a108ad7455645c4ac170a6ce680c9ed560d4"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.734706 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbgcq" event={"ID":"3e95505c-a7eb-4d9f-be2f-e7129e3643b8","Type":"ContainerStarted","Data":"d7e449df56d4aa55bd535980c4c65253f3325cde543e24f2634b3227e292a791"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.740844 5120 generic.go:358] "Generic (PLEG): container finished" podID="089fc2c1-8274-4532-a14a-21194d01a310" containerID="8c8add6d6346bffb920d193189f09708f0ce72391c85a3b8f9fe5d165b2e4b5d" exitCode=0 Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.741009 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p26dp" event={"ID":"089fc2c1-8274-4532-a14a-21194d01a310","Type":"ContainerDied","Data":"8c8add6d6346bffb920d193189f09708f0ce72391c85a3b8f9fe5d165b2e4b5d"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.741047 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p26dp" event={"ID":"089fc2c1-8274-4532-a14a-21194d01a310","Type":"ContainerStarted","Data":"408feb4598d3b1d5ae322e87417dab316fa1b75c632f7ace01cbd6d89c0b3941"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.755194 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ed0804-5c2e-4054-a7af-c90d2103aacb-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.765928 5120 generic.go:358] "Generic (PLEG): container finished" podID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerID="0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5" exitCode=0 Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.766204 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fztfm" event={"ID":"4f669e70-10cd-47da-abc9-84be80cb5cfb","Type":"ContainerDied","Data":"0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.766251 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fztfm" event={"ID":"4f669e70-10cd-47da-abc9-84be80cb5cfb","Type":"ContainerStarted","Data":"942f286364f00775972ff57ef7ee9a1b6d83531d392b957342335e79a3c8a683"} Jan 22 
11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.774979 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" event={"ID":"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2","Type":"ContainerStarted","Data":"2f538daad7777bd0dc15f7e658704af0591513bbc56f30de7eaeb6e9ec113474"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.778122 5120 generic.go:358] "Generic (PLEG): container finished" podID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerID="dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3" exitCode=0 Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.779923 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q8d8" event={"ID":"ed489f01-1188-4d6f-9ed4-9618fddf1eab","Type":"ContainerDied","Data":"dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.779974 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q8d8" event={"ID":"ed489f01-1188-4d6f-9ed4-9618fddf1eab","Type":"ContainerStarted","Data":"1b3c4ff9732c93011b494f79b9052c81bdd854fe832d0d1aff9714069c08086b"} Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.805139 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.806644 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.306618707 +0000 UTC m=+146.050567048 (durationBeforeRetry 500ms). 
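[annotation] The "SyncLoop (PLEG)" and "Generic (PLEG): container finished" records above come from the Pod Lifecycle Event Generator, which relays container state transitions (ContainerStarted, ContainerDied together with an exit code) from the runtime back into the kubelet sync loop. A compact sketch of the event shape and dispatch those lines reflect; field names are simplified from the kubelet's own types:

    // pleg.go: minimal PLEG-style event dispatch.
    package main

    import "fmt"

    type PodLifecycleEventType string

    const (
    	ContainerStarted PodLifecycleEventType = "ContainerStarted"
    	ContainerDied    PodLifecycleEventType = "ContainerDied"
    )

    type PodLifecycleEvent struct {
    	ID   string // pod UID
    	Type PodLifecycleEventType
    	Data string // container or sandbox ID
    }

    func handle(ev PodLifecycleEvent, exitCode int) {
    	switch ev.Type {
    	case ContainerDied:
    		// Mirrors: Generic (PLEG): container finished ... exitCode=0
    		fmt.Printf("container finished podID=%q containerID=%q exitCode=%d\n",
    			ev.ID, ev.Data, exitCode)
    	case ContainerStarted:
    		fmt.Printf("container started podID=%q containerID=%q\n", ev.ID, ev.Data)
    	}
    }

    func main() {
    	handle(PodLifecycleEvent{ID: "3e95505c", Type: ContainerDied, Data: "7de27767"}, 0)
    	handle(PodLifecycleEvent{ID: "3e95505c", Type: ContainerStarted, Data: "d7e449df"})
    }
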
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.813343 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5nvn"] Jan 22 11:50:10 crc kubenswrapper[5120]: W0122 11:50:10.840574 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a52d1c0_c55c_47b4_936e_a783304a0e89.slice/crio-78c9a69e1fa99c2e87a7582c593ca2b6cefde510daa7b05fc0d9db0261917a2a WatchSource:0}: Error finding container 78c9a69e1fa99c2e87a7582c593ca2b6cefde510daa7b05fc0d9db0261917a2a: Status 404 returned error can't find the container with id 78c9a69e1fa99c2e87a7582c593ca2b6cefde510daa7b05fc0d9db0261917a2a Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.848831 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.853813 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-7x2rm" Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.890754 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t67f7"] Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.908643 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:10 crc kubenswrapper[5120]: E0122 11:50:10.909525 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.409507337 +0000 UTC m=+146.153455678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:10 crc kubenswrapper[5120]: I0122 11:50:10.941586 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.011694 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.012084 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.51205851 +0000 UTC m=+146.256006841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.113847 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.114332 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.614312536 +0000 UTC m=+146.358260877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.213559 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t67f7"] Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.213716 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.214517 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.214921 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.7149048 +0000 UTC m=+146.458853141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.222489 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.315984 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-utilities\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.316066 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.316094 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsv5h\" (UniqueName: \"kubernetes.io/projected/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-kube-api-access-lsv5h\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.316120 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-catalog-content\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.316490 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 11:50:11.816470059 +0000 UTC m=+146.560418400 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.335550 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mbm7w"] Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.362748 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbm7w"] Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.362994 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.364254 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.417714 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.418373 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.918338216 +0000 UTC m=+146.662286567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.418715 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-utilities\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.418759 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.418779 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lsv5h\" (UniqueName: \"kubernetes.io/projected/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-kube-api-access-lsv5h\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.418803 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-catalog-content\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.418900 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-utilities\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.418922 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmxxj\" (UniqueName: \"kubernetes.io/projected/fda19cab-4c2e-47a2-993c-ce6f3795e561-kube-api-access-lmxxj\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.418970 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-catalog-content\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.419462 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-utilities\") pod 
\"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.419695 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-catalog-content\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.419728 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:11.919716109 +0000 UTC m=+146.663664610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.444682 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsv5h\" (UniqueName: \"kubernetes.io/projected/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-kube-api-access-lsv5h\") pod \"redhat-operators-t67f7\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.520438 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.520878 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.020858017 +0000 UTC m=+146.764806358 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.520951 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-utilities\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.521003 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lmxxj\" (UniqueName: \"kubernetes.io/projected/fda19cab-4c2e-47a2-993c-ce6f3795e561-kube-api-access-lmxxj\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.521030 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-catalog-content\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.521649 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-catalog-content\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.522478 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-utilities\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.550228 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.558640 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmxxj\" (UniqueName: \"kubernetes.io/projected/fda19cab-4c2e-47a2-993c-ce6f3795e561-kube-api-access-lmxxj\") pod \"redhat-operators-mbm7w\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.624606 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.624660 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.124640699 +0000 UTC m=+146.868589030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.725611 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.726046 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.226028825 +0000 UTC m=+146.969977166 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.805643 5120 generic.go:358] "Generic (PLEG): container finished" podID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerID="f7fd7cbfe79a1adebb0cfbd3dc66028444cc6622806f14ca6c6694184f1c03cf" exitCode=0 Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.805789 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5nvn" event={"ID":"5a52d1c0-c55c-47b4-936e-a783304a0e89","Type":"ContainerDied","Data":"f7fd7cbfe79a1adebb0cfbd3dc66028444cc6622806f14ca6c6694184f1c03cf"} Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.805858 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5nvn" event={"ID":"5a52d1c0-c55c-47b4-936e-a783304a0e89","Type":"ContainerStarted","Data":"78c9a69e1fa99c2e87a7582c593ca2b6cefde510daa7b05fc0d9db0261917a2a"} Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.819837 5120 generic.go:358] "Generic (PLEG): container finished" podID="316646c5-1898-417a-8bd7-00eeadfe1243" containerID="c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152" exitCode=0 Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.820215 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp8qf" event={"ID":"316646c5-1898-417a-8bd7-00eeadfe1243","Type":"ContainerDied","Data":"c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152"} Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.828402 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.833841 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.834337 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.334316116 +0000 UTC m=+147.078264627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.836207 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"20ed0804-5c2e-4054-a7af-c90d2103aacb","Type":"ContainerStarted","Data":"18125d47d2426af9cc47b5088c7d2ff08b796e103e35e0674e0ef49d47cd98bb"} Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.935030 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.935869 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.435827012 +0000 UTC m=+147.179775353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.941478 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:11 crc kubenswrapper[5120]: E0122 11:50:11.943279 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.443237653 +0000 UTC m=+147.187186004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:11 crc kubenswrapper[5120]: I0122 11:50:11.992923 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t67f7"] Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.027159 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.036191 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.043277 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.043555 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.5435349 +0000 UTC m=+147.287483241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.046996 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.047066 5120 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.159987 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.160447 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.66042782 +0000 UTC m=+147.404376311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.262067 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.262274 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.762244994 +0000 UTC m=+147.506193335 (durationBeforeRetry 500ms). 
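
The ExecSync failures above are the kubelet running this container's readiness probe command, ["/bin/bash","-c","test -f /ready/ready"], through the CRI while the container is already stopping; the runtime refuses to register a new exec PID in a stopping container, so the prober reports probeResult="unknown" rather than a pass or fail. A probe equivalent to the one in the log can be declared through the Kubernetes Go API as below; the surrounding container spec is illustrative, and the ProbeHandler field name assumes a k8s.io/api release from v0.22 on.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Readiness probe equivalent to the cmd shown in the log:
        // ["/bin/bash","-c","test -f /ready/ready"].
        probe := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                Exec: &corev1.ExecAction{
                    Command: []string{"/bin/bash", "-c", "test -f /ready/ready"},
                },
            },
            PeriodSeconds:    1,
            FailureThreshold: 3,
        }
        container := corev1.Container{
            Name:           "kube-multus-additional-cni-plugins",
            ReadinessProbe: probe,
        }
        // While the container runs, the kubelet execs this command via the CRI
        // ExecSync call; once the container enters "stopping", the runtime
        // rejects new execs, which surfaces as the "Probe errored" lines above.
        fmt.Printf("%v\n", container.ReadinessProbe.Exec.Command)
    }
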
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.262567 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.263180 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.763172417 +0000 UTC m=+147.507120758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.335860 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbm7w"] Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.364627 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.365064 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.865047374 +0000 UTC m=+147.608995705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.466000 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.466442 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:12.966427207 +0000 UTC m=+147.710375548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.567906 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.568324 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.068307464 +0000 UTC m=+147.812255805 (durationBeforeRetry 500ms). 
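
The cadence of these retries is set by nestedpendingoperations: a failed volume operation is parked with a retry deadline ("No retries permitted until ...", durationBeforeRetry 500ms), and although the reconciler re-requests the work on every pass, the pending-operations table refuses to start a duplicate until that deadline expires. That is why the same MountDevice/TearDown pair reappears roughly twice a second. A toy version of that gate, with invented names:

    package main

    import (
        "fmt"
        "time"
    )

    // pendingOp mirrors the idea behind the kubelet's nestedpendingoperations:
    // an operation that failed may not be retried before retryAfter.
    type pendingOp struct {
        retryAfter time.Time
    }

    type opTable map[string]*pendingOp // keyed by volume name

    func (t opTable) tryRun(volume string, op func() error) {
        if p, ok := t[volume]; ok && time.Now().Before(p.retryAfter) {
            return // still backing off; the reconciler will ask again next pass
        }
        if err := op(); err != nil {
            t[volume] = &pendingOp{retryAfter: time.Now().Add(500 * time.Millisecond)}
            fmt.Printf("failed, no retries permitted until %s: %v\n",
                t[volume].retryAfter.Format(time.RFC3339Nano), err)
        }
    }

    func main() {
        table := opTable{}
        mount := func() error { return fmt.Errorf("driver not registered") }
        for i := 0; i < 5; i++ {
            table.tryRun("pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2", mount)
            time.Sleep(200 * time.Millisecond) // reconciler passes outpace the backoff
        }
    }
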
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.671677 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.672036 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.172024245 +0000 UTC m=+147.915972586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.732096 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.773887 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.774278 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.27426119 +0000 UTC m=+148.018209531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.854830 5120 generic.go:358] "Generic (PLEG): container finished" podID="20ed0804-5c2e-4054-a7af-c90d2103aacb" containerID="1cd95b44bb4d0252e12b33d01daf7c5bffc97e700eedfc0c02f19f25cf6b8dca" exitCode=0 Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.855029 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"20ed0804-5c2e-4054-a7af-c90d2103aacb","Type":"ContainerDied","Data":"1cd95b44bb4d0252e12b33d01daf7c5bffc97e700eedfc0c02f19f25cf6b8dca"} Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.869228 5120 generic.go:358] "Generic (PLEG): container finished" podID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerID="225b2e979aa1449106827d89e2af943939a02a67507731955126d01302822780" exitCode=0 Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.869704 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbm7w" event={"ID":"fda19cab-4c2e-47a2-993c-ce6f3795e561","Type":"ContainerDied","Data":"225b2e979aa1449106827d89e2af943939a02a67507731955126d01302822780"} Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.869748 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbm7w" event={"ID":"fda19cab-4c2e-47a2-993c-ce6f3795e561","Type":"ContainerStarted","Data":"f088b06a5bed8fcb72cf992ec4dfa09770bed17e70fa6aa78bd0452016efb6e5"} Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.878125 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.878651 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.378635897 +0000 UTC m=+148.122584238 (durationBeforeRetry 500ms). 
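
"Generic (PLEG): container finished" and the paired ContainerDied/ContainerStarted events come from the pod lifecycle event generator, which periodically relists containers from the runtime and converts state transitions into events for the sync loop; here the marketplace pods' extract containers exit with code 0 while their pod sandboxes keep running. A toy relist-and-diff in the same spirit, with invented types (the kubelet's real generic PLEG is in pkg/kubelet/pleg, matching the generic.go references above):

    package main

    import "fmt"

    type containerState string

    const (
        stateRunning containerState = "running"
        stateExited  containerState = "exited"
    )

    type plegEvent struct {
        podID, containerID, kind string
    }

    // relist diffs the previous and current container states and emits
    // ContainerStarted/ContainerDied events, loosely like the generic PLEG.
    func relist(old, cur map[string]containerState, podID string) []plegEvent {
        var events []plegEvent
        for id, st := range cur {
            prev, seen := old[id]
            switch {
            case !seen && st == stateRunning:
                events = append(events, plegEvent{podID, id, "ContainerStarted"})
            case seen && prev == stateRunning && st == stateExited:
                events = append(events, plegEvent{podID, id, "ContainerDied"})
            }
        }
        return events
    }

    func main() {
        old := map[string]containerState{"f7fd7cbf": stateRunning}
        cur := map[string]containerState{"f7fd7cbf": stateExited, "78c9a69e": stateRunning}
        for _, e := range relist(old, cur, "5a52d1c0") {
            fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.podID, e.kind, e.containerID)
        }
    }
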
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.883790 5120 generic.go:358] "Generic (PLEG): container finished" podID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerID="985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668" exitCode=0 Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.883885 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t67f7" event={"ID":"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b","Type":"ContainerDied","Data":"985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668"} Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.883923 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t67f7" event={"ID":"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b","Type":"ContainerStarted","Data":"ab803e6a4d6bc8f6c5535f7b6ba4ab7280d0c0d527dc407d8f992ddd6ad5d49c"} Jan 22 11:50:12 crc kubenswrapper[5120]: I0122 11:50:12.980460 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:12 crc kubenswrapper[5120]: E0122 11:50:12.981105 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.481089257 +0000 UTC m=+148.225037598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.083607 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.083944 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.583931646 +0000 UTC m=+148.327879987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.184677 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.184911 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.684883 +0000 UTC m=+148.428831341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.187068 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.187892 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.687878253 +0000 UTC m=+148.431826594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.188536 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.289041 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.289278 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.789235927 +0000 UTC m=+148.533184268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.290158 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.290846 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.790833965 +0000 UTC m=+148.534782306 (durationBeforeRetry 500ms). 
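
"SyncLoop ADD" source="api" is the kubelet's sync loop learning about a brand-new pod (revision-pruner-11-crc) from its apiserver watch, and the "SyncLoop UPDATE" lines are changes to pods it already tracks. Internally the kubelet uses its own config-source machinery rather than client-go informers, but the same add/update stream can be observed from outside the kubelet with a shared informer, as in this stand-alone sketch (the kubeconfig resolution and resync period are illustrative choices):

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
        podInformer := factory.Core().V1().Pods().Informer()
        podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                pod := obj.(*corev1.Pod)
                fmt.Printf("ADD source=api pod=%s/%s\n", pod.Namespace, pod.Name)
            },
            UpdateFunc: func(_, newObj interface{}) {
                pod := newObj.(*corev1.Pod)
                fmt.Printf("UPDATE source=api pod=%s/%s\n", pod.Namespace, pod.Name)
            },
        })
        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        factory.WaitForCacheSync(stop)
        select {} // block; handlers fire as pods are added and patched
    }
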
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.398012 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.398433 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:13.898403659 +0000 UTC m=+148.642352000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.501062 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.501445 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.001428974 +0000 UTC m=+148.745377315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.602380 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.602762 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.102722746 +0000 UTC m=+148.846671087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.704293 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.704778 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.204753206 +0000 UTC m=+148.948701547 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.805914 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.806185 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.30613163 +0000 UTC m=+149.050079971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.807539 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.808256 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.308235511 +0000 UTC m=+149.052183852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.811484 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.816414 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.816503 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.830732 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.892370 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" event={"ID":"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2","Type":"ContainerStarted","Data":"ff77e9e03e96e6345d93dc85455d6e2c23cacd600f28bb808b09581d7fc1076a"} Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.928256 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.928401 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.428372529 +0000 UTC m=+149.172320870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.929319 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.929485 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:13 crc kubenswrapper[5120]: I0122 11:50:13.929716 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:13 crc kubenswrapper[5120]: E0122 11:50:13.929732 5120 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.429709091 +0000 UTC m=+149.173657432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.031052 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.031343 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.531312211 +0000 UTC m=+149.275260552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.031694 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.031747 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.031825 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.032018 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") " 
pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.032420 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.532393587 +0000 UTC m=+149.276342098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.056351 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.133380 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.133670 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.633624628 +0000 UTC m=+149.377572969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.133772 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.134307 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.634287894 +0000 UTC m=+149.378236235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.140948 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.227239 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.235051 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.235240 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.735212298 +0000 UTC m=+149.479160629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.235407 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ed0804-5c2e-4054-a7af-c90d2103aacb-kubelet-dir\") pod \"20ed0804-5c2e-4054-a7af-c90d2103aacb\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.235494 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ed0804-5c2e-4054-a7af-c90d2103aacb-kube-api-access\") pod \"20ed0804-5c2e-4054-a7af-c90d2103aacb\" (UID: \"20ed0804-5c2e-4054-a7af-c90d2103aacb\") " Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.235506 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20ed0804-5c2e-4054-a7af-c90d2103aacb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "20ed0804-5c2e-4054-a7af-c90d2103aacb" (UID: "20ed0804-5c2e-4054-a7af-c90d2103aacb"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.236019 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.236138 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ed0804-5c2e-4054-a7af-c90d2103aacb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.236403 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.736375335 +0000 UTC m=+149.480323676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.271899 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ed0804-5c2e-4054-a7af-c90d2103aacb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "20ed0804-5c2e-4054-a7af-c90d2103aacb" (UID: "20ed0804-5c2e-4054-a7af-c90d2103aacb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.337920 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.338077 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.838047097 +0000 UTC m=+149.581995438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.338266 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.338440 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ed0804-5c2e-4054-a7af-c90d2103aacb-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.338888 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.838879097 +0000 UTC m=+149.582827428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.439998 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.440272 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.94021386 +0000 UTC m=+149.684162201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.440918 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.441580 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:14.941570753 +0000 UTC m=+149.685519084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.476838 5120 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.534837 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.541872 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.542138 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-22 11:50:15.042114607 +0000 UTC m=+149.786062948 (durationBeforeRetry 500ms). 
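
"Adding socket path or updating timestamp to desired state cache" is the kubelet's plugin watcher noticing the registration socket kubevirt.io.hostpath-provisioner-reg.sock appear under /var/lib/kubelet/plugins_registry, the first step of the recovery that follows. The watcher is built on inotify; a minimal stand-alone equivalent using the fsnotify library (the directory path is taken from the log, the filtering logic is illustrative):

    package main

    import (
        "log"
        "path/filepath"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()

        // The kubelet watches this directory for *-reg.sock registration sockets.
        if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
            log.Fatal(err)
        }
        for {
            select {
            case ev := <-w.Events:
                if ev.Op&fsnotify.Create != 0 && filepath.Ext(ev.Name) == ".sock" {
                    log.Printf("Adding socket path to desired state cache path=%q", ev.Name)
                }
            case err := <-w.Errors:
                log.Println("watch error:", err)
            }
        }
    }
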
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.644249 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: E0122 11:50:14.645475 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-22 11:50:15.145414358 +0000 UTC m=+149.889362699 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-49gkx" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.744733 5120 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T11:50:14.476895218Z","UUID":"c7e8900c-100e-4568-826d-b82a525ec5a2","Handler":null,"Name":"","Endpoint":""} Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.756356 5120 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.756409 5120 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.756850 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.761532 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.858799 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.862763 5120 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.862817 5120 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.892008 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-49gkx\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.904677 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1144df8b-88aa-4dd2-9b2c-ba41340bed9f","Type":"ContainerStarted","Data":"f6539fac927736fe00ed8becb89b97e99fa82f09b5f1b989a5e7d7d1eb99b316"} Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.907769 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"20ed0804-5c2e-4054-a7af-c90d2103aacb","Type":"ContainerDied","Data":"18125d47d2426af9cc47b5088c7d2ff08b796e103e35e0674e0ef49d47cd98bb"} Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.907826 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18125d47d2426af9cc47b5088c7d2ff08b796e103e35e0674e0ef49d47cd98bb" Jan 22 11:50:14 crc kubenswrapper[5120]: I0122 11:50:14.907839 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.080612 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.347430 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-49gkx"] Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.595488 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.937785 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1144df8b-88aa-4dd2-9b2c-ba41340bed9f","Type":"ContainerStarted","Data":"da7f6775170b711deee1d912dd17a150e8dc85403363664ea0cfd6e6d2a35197"} Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.941799 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" event={"ID":"e16334d5-3fa8-48de-a8e0-af1f9fa51926","Type":"ContainerStarted","Data":"30738daefd26ec1936e210196218667fac004e9fbe6021d4a2265a6c692aabac"} Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.962035 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=2.96202136 podStartE2EDuration="2.96202136s" podCreationTimestamp="2026-01-22 11:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:15.961463537 +0000 UTC m=+150.705411878" watchObservedRunningTime="2026-01-22 11:50:15.96202136 +0000 UTC m=+150.705969701" Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.976684 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" event={"ID":"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2","Type":"ContainerStarted","Data":"d0b13f4b17fa46610768ed8124e834029a023c21324df03453e3ee2901184dce"} Jan 22 11:50:15 crc kubenswrapper[5120]: I0122 11:50:15.976765 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" event={"ID":"d0f9dd1c-1fa6-44f9-b929-bd81b57d63f2","Type":"ContainerStarted","Data":"00c2b34379e711ae18744911c1a948b8f3eaad8ba2b87b458e2e44e2eed2a37e"} Jan 22 11:50:16 crc kubenswrapper[5120]: I0122 11:50:16.007652 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-lsqq6" podStartSLOduration=20.007622504 podStartE2EDuration="20.007622504s" podCreationTimestamp="2026-01-22 11:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:16.003006902 +0000 UTC m=+150.746955263" watchObservedRunningTime="2026-01-22 11:50:16.007622504 +0000 UTC m=+150.751570865" Jan 22 11:50:16 crc kubenswrapper[5120]: I0122 11:50:16.424360 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-btnnz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 22 11:50:16 crc kubenswrapper[5120]: I0122 11:50:16.424916 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-btnnz" podUID="a1372d1c-9557-4da9-b571-ea78602f491f" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 22 11:50:16 crc kubenswrapper[5120]: I0122 11:50:16.527796 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-d4ftw" Jan 22 11:50:16 crc kubenswrapper[5120]: I0122 11:50:16.998782 5120 generic.go:358] "Generic (PLEG): container finished" podID="1144df8b-88aa-4dd2-9b2c-ba41340bed9f" containerID="da7f6775170b711deee1d912dd17a150e8dc85403363664ea0cfd6e6d2a35197" exitCode=0 Jan 22 11:50:16 crc kubenswrapper[5120]: I0122 11:50:16.998848 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1144df8b-88aa-4dd2-9b2c-ba41340bed9f","Type":"ContainerDied","Data":"da7f6775170b711deee1d912dd17a150e8dc85403363664ea0cfd6e6d2a35197"} Jan 22 11:50:17 crc kubenswrapper[5120]: I0122 11:50:17.511737 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:50:19 crc kubenswrapper[5120]: I0122 11:50:19.016136 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" event={"ID":"e16334d5-3fa8-48de-a8e0-af1f9fa51926","Type":"ContainerStarted","Data":"e1bbb65cdff1f34e73b67d92dec5e5520f1d8e88ebcd7bef109e31c63042510c"} Jan 22 11:50:19 crc kubenswrapper[5120]: I0122 11:50:19.017015 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:19 crc kubenswrapper[5120]: I0122 11:50:19.373171 5120 patch_prober.go:28] interesting pod/downloads-747b44746d-btnnz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Jan 22 11:50:19 crc kubenswrapper[5120]: I0122 11:50:19.373382 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-btnnz" podUID="a1372d1c-9557-4da9-b571-ea78602f491f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Jan 22 11:50:19 crc kubenswrapper[5120]: I0122 11:50:19.817616 5120 patch_prober.go:28] interesting pod/console-64d44f6ddf-7q8jr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 22 11:50:19 crc kubenswrapper[5120]: I0122 11:50:19.817684 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-7q8jr" podUID="efec95f9-a526-41f9-bd7c-0d1bd2505eda" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 22 11:50:20 crc kubenswrapper[5120]: I0122 11:50:20.689413 5120 ???:1] "http: TLS handshake error from 192.168.126.11:49334: no serving certificate available for the kubelet" Jan 22 11:50:22 crc kubenswrapper[5120]: E0122 11:50:22.020381 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f 
/ready/ready"] Jan 22 11:50:22 crc kubenswrapper[5120]: E0122 11:50:22.022428 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:22 crc kubenswrapper[5120]: E0122 11:50:22.024059 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:22 crc kubenswrapper[5120]: E0122 11:50:22.024143 5120 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 22 11:50:26 crc kubenswrapper[5120]: I0122 11:50:26.439789 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-btnnz" Jan 22 11:50:26 crc kubenswrapper[5120]: I0122 11:50:26.463827 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" podStartSLOduration=141.463799641 podStartE2EDuration="2m21.463799641s" podCreationTimestamp="2026-01-22 11:48:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:19.059578207 +0000 UTC m=+153.803526548" watchObservedRunningTime="2026-01-22 11:50:26.463799641 +0000 UTC m=+161.207747982" Jan 22 11:50:29 crc kubenswrapper[5120]: I0122 11:50:29.820916 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:50:29 crc kubenswrapper[5120]: I0122 11:50:29.826775 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-7q8jr" Jan 22 11:50:32 crc kubenswrapper[5120]: E0122 11:50:32.024122 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:32 crc kubenswrapper[5120]: E0122 11:50:32.026524 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:32 crc kubenswrapper[5120]: E0122 11:50:32.028399 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:32 crc 
kubenswrapper[5120]: E0122 11:50:32.028446 5120 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 22 11:50:37 crc kubenswrapper[5120]: I0122 11:50:37.515272 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-9hjpw" Jan 22 11:50:39 crc kubenswrapper[5120]: I0122 11:50:39.148357 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mddkn_48ce43ae-5f5f-4ae6-91bd-98390a12c650/kube-multus-additional-cni-plugins/0.log" Jan 22 11:50:39 crc kubenswrapper[5120]: I0122 11:50:39.148431 5120 generic.go:358] "Generic (PLEG): container finished" podID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" exitCode=137 Jan 22 11:50:39 crc kubenswrapper[5120]: I0122 11:50:39.148632 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" event={"ID":"48ce43ae-5f5f-4ae6-91bd-98390a12c650","Type":"ContainerDied","Data":"b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f"} Jan 22 11:50:39 crc kubenswrapper[5120]: I0122 11:50:39.435075 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 22 11:50:40 crc kubenswrapper[5120]: I0122 11:50:40.027203 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:50:41 crc kubenswrapper[5120]: I0122 11:50:41.196434 5120 ???:1] "http: TLS handshake error from 192.168.126.11:48732: no serving certificate available for the kubelet" Jan 22 11:50:42 crc kubenswrapper[5120]: E0122 11:50:42.019404 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f is running failed: container process not found" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:42 crc kubenswrapper[5120]: E0122 11:50:42.019976 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f is running failed: container process not found" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:42 crc kubenswrapper[5120]: E0122 11:50:42.020287 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f is running failed: container process not found" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 22 11:50:42 crc kubenswrapper[5120]: E0122 11:50:42.020519 5120 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container 
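
The recurring ExecSync failures above come from the readiness probe of the kube-multus-additional-cni-plugins container, which execs `test -f /ready/ready` roughly every ten seconds (11:50:22, 11:50:32, 11:50:42); while the container is stopping CRI-O cannot register an exec PID, and once it has exited with code 137 the process is gone entirely, so the probe result degrades to "unknown". For reference, a probe of this shape can be expressed with the k8s.io/api/core/v1 types; this is an illustrative sketch, not the DaemonSet manifest OpenShift actually ships:

```go
// Sketch of an exec readiness probe equivalent to the one failing above.
// The corev1 types are the real Kubernetes API types; the surrounding
// container spec is illustrative, not the actual cni-sysctl-allowlist
// DaemonSet definition.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name: "kube-multus-additional-cni-plugins",
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{
					// The exact command seen in the ExecSync log entries.
					Command: []string{"/bin/bash", "-c", "test -f /ready/ready"},
				},
			},
			PeriodSeconds: 10, // assumed from the ~10s spacing of the probe errors
		},
	}
	fmt.Println(container.ReadinessProbe.Exec.Command)
}
```
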
Jan 22 11:50:42 crc kubenswrapper[5120]: E0122 11:50:42.020519 5120 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.010468 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.101971 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kubelet-dir\") pod \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") "
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.102180 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kube-api-access\") pod \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\" (UID: \"1144df8b-88aa-4dd2-9b2c-ba41340bed9f\") "
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.102189 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1144df8b-88aa-4dd2-9b2c-ba41340bed9f" (UID: "1144df8b-88aa-4dd2-9b2c-ba41340bed9f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.104106 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.110448 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1144df8b-88aa-4dd2-9b2c-ba41340bed9f" (UID: "1144df8b-88aa-4dd2-9b2c-ba41340bed9f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.158020 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mddkn_48ce43ae-5f5f-4ae6-91bd-98390a12c650/kube-multus-additional-cni-plugins/0.log"
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.158100 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn"
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.191131 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"1144df8b-88aa-4dd2-9b2c-ba41340bed9f","Type":"ContainerDied","Data":"f6539fac927736fe00ed8becb89b97e99fa82f09b5f1b989a5e7d7d1eb99b316"}
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.191171 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6539fac927736fe00ed8becb89b97e99fa82f09b5f1b989a5e7d7d1eb99b316"
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.191280 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.195926 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mddkn_48ce43ae-5f5f-4ae6-91bd-98390a12c650/kube-multus-additional-cni-plugins/0.log"
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.196064 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn" event={"ID":"48ce43ae-5f5f-4ae6-91bd-98390a12c650","Type":"ContainerDied","Data":"224c53d4c2e0d2802958ae5a4e8f3773f21300049c7b7357bf9e459ec82f1d55"}
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.196107 5120 scope.go:117] "RemoveContainer" containerID="b6626dbcfe2359c8932616225dead34356537fe01ca973f60304e807a266661f"
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.196141 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mddkn"
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.205500 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/48ce43ae-5f5f-4ae6-91bd-98390a12c650-ready\") pod \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") "
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.205585 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdjp5\" (UniqueName: \"kubernetes.io/projected/48ce43ae-5f5f-4ae6-91bd-98390a12c650-kube-api-access-mdjp5\") pod \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") "
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.205651 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist\") pod \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") "
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.206370 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48ce43ae-5f5f-4ae6-91bd-98390a12c650-ready" (OuterVolumeSpecName: "ready") pod "48ce43ae-5f5f-4ae6-91bd-98390a12c650" (UID: "48ce43ae-5f5f-4ae6-91bd-98390a12c650"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.206558 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48ce43ae-5f5f-4ae6-91bd-98390a12c650-tuning-conf-dir\") pod \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\" (UID: \"48ce43ae-5f5f-4ae6-91bd-98390a12c650\") "
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.206593 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "48ce43ae-5f5f-4ae6-91bd-98390a12c650" (UID: "48ce43ae-5f5f-4ae6-91bd-98390a12c650"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.206635 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48ce43ae-5f5f-4ae6-91bd-98390a12c650-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "48ce43ae-5f5f-4ae6-91bd-98390a12c650" (UID: "48ce43ae-5f5f-4ae6-91bd-98390a12c650"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.207582 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1144df8b-88aa-4dd2-9b2c-ba41340bed9f-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.207611 5120 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48ce43ae-5f5f-4ae6-91bd-98390a12c650-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.207626 5120 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/48ce43ae-5f5f-4ae6-91bd-98390a12c650-ready\") on node \"crc\" DevicePath \"\""
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.207639 5120 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48ce43ae-5f5f-4ae6-91bd-98390a12c650-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.209901 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ce43ae-5f5f-4ae6-91bd-98390a12c650-kube-api-access-mdjp5" (OuterVolumeSpecName: "kube-api-access-mdjp5") pod "48ce43ae-5f5f-4ae6-91bd-98390a12c650" (UID: "48ce43ae-5f5f-4ae6-91bd-98390a12c650"). InnerVolumeSpecName "kube-api-access-mdjp5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.309147 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mdjp5\" (UniqueName: \"kubernetes.io/projected/48ce43ae-5f5f-4ae6-91bd-98390a12c650-kube-api-access-mdjp5\") on node \"crc\" DevicePath \"\""
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.565837 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mddkn"]
Jan 22 11:50:45 crc kubenswrapper[5120]: I0122 11:50:45.586806 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mddkn"]
Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.206677 5120 generic.go:358] "Generic (PLEG): container finished" podID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerID="b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759" exitCode=0
Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.206742 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q8d8" event={"ID":"ed489f01-1188-4d6f-9ed4-9618fddf1eab","Type":"ContainerDied","Data":"b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759"}
Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.211943 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbm7w" event={"ID":"fda19cab-4c2e-47a2-993c-ce6f3795e561","Type":"ContainerStarted","Data":"0f93aadd0112a21eacebe8630496cabe8f22f4bbdfd32043b156cba561df7b59"}
Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.215147 5120 generic.go:358] "Generic (PLEG): container finished" podID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerID="04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e" exitCode=0
Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.215289 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5nvn" event={"ID":"5a52d1c0-c55c-47b4-936e-a783304a0e89","Type":"ContainerDied","Data":"04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e"}
Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.225050 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t67f7" event={"ID":"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b","Type":"ContainerStarted","Data":"6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40"}
Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.238164 5120 generic.go:358] "Generic (PLEG): container finished" podID="316646c5-1898-417a-8bd7-00eeadfe1243" containerID="78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb" exitCode=0
Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.238264 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp8qf" event={"ID":"316646c5-1898-417a-8bd7-00eeadfe1243","Type":"ContainerDied","Data":"78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb"}
Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.241596 5120 generic.go:358] "Generic (PLEG): container finished" podID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerID="c75872699b265f647f93429326d1a8652dfa1cbe0ac2767c1c24f307072383a1" exitCode=0
event={"ID":"3e95505c-a7eb-4d9f-be2f-e7129e3643b8","Type":"ContainerDied","Data":"c75872699b265f647f93429326d1a8652dfa1cbe0ac2767c1c24f307072383a1"} Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.259332 5120 generic.go:358] "Generic (PLEG): container finished" podID="089fc2c1-8274-4532-a14a-21194d01a310" containerID="9bc291a555447cad49a14283506bdb0035ead9ce2860615680f3af52e9dceda9" exitCode=0 Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.259484 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p26dp" event={"ID":"089fc2c1-8274-4532-a14a-21194d01a310","Type":"ContainerDied","Data":"9bc291a555447cad49a14283506bdb0035ead9ce2860615680f3af52e9dceda9"} Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.263365 5120 generic.go:358] "Generic (PLEG): container finished" podID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerID="30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe" exitCode=0 Jan 22 11:50:46 crc kubenswrapper[5120]: I0122 11:50:46.263413 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fztfm" event={"ID":"4f669e70-10cd-47da-abc9-84be80cb5cfb","Type":"ContainerDied","Data":"30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe"} Jan 22 11:50:46 crc kubenswrapper[5120]: E0122 11:50:46.751029 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a52d1c0_c55c_47b4_936e_a783304a0e89.slice/crio-04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.274736 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p26dp" event={"ID":"089fc2c1-8274-4532-a14a-21194d01a310","Type":"ContainerStarted","Data":"a3a3097fd4339ce32794c09b0be56788819c79a81ede80e9fdec2115b13052f2"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.277059 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fztfm" event={"ID":"4f669e70-10cd-47da-abc9-84be80cb5cfb","Type":"ContainerStarted","Data":"bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.279334 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q8d8" event={"ID":"ed489f01-1188-4d6f-9ed4-9618fddf1eab","Type":"ContainerStarted","Data":"36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.281731 5120 generic.go:358] "Generic (PLEG): container finished" podID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerID="0f93aadd0112a21eacebe8630496cabe8f22f4bbdfd32043b156cba561df7b59" exitCode=0 Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.281794 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbm7w" event={"ID":"fda19cab-4c2e-47a2-993c-ce6f3795e561","Type":"ContainerDied","Data":"0f93aadd0112a21eacebe8630496cabe8f22f4bbdfd32043b156cba561df7b59"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.284536 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5nvn" 
event={"ID":"5a52d1c0-c55c-47b4-936e-a783304a0e89","Type":"ContainerStarted","Data":"e94ae6f6e61790076393376b71522698c65d8d872bdfe197441f1ede23e779f0"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.288678 5120 generic.go:358] "Generic (PLEG): container finished" podID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerID="6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40" exitCode=0 Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.288881 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t67f7" event={"ID":"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b","Type":"ContainerDied","Data":"6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.297113 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp8qf" event={"ID":"316646c5-1898-417a-8bd7-00eeadfe1243","Type":"ContainerStarted","Data":"043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.301013 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbgcq" event={"ID":"3e95505c-a7eb-4d9f-be2f-e7129e3643b8","Type":"ContainerStarted","Data":"f4f7bc0583697b2f695f6f1c26c7ce5ff64e708099c05083dc3b1510e1605486"} Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.308061 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p26dp" podStartSLOduration=4.915322265 podStartE2EDuration="39.30804356s" podCreationTimestamp="2026-01-22 11:50:08 +0000 UTC" firstStartedPulling="2026-01-22 11:50:10.742632237 +0000 UTC m=+145.486580578" lastFinishedPulling="2026-01-22 11:50:45.135353522 +0000 UTC m=+179.879301873" observedRunningTime="2026-01-22 11:50:47.304941204 +0000 UTC m=+182.048889555" watchObservedRunningTime="2026-01-22 11:50:47.30804356 +0000 UTC m=+182.051991901" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.347747 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z5nvn" podStartSLOduration=4.064584476 podStartE2EDuration="37.34772834s" podCreationTimestamp="2026-01-22 11:50:10 +0000 UTC" firstStartedPulling="2026-01-22 11:50:11.807198018 +0000 UTC m=+146.551146359" lastFinishedPulling="2026-01-22 11:50:45.090341882 +0000 UTC m=+179.834290223" observedRunningTime="2026-01-22 11:50:47.329890748 +0000 UTC m=+182.073839089" watchObservedRunningTime="2026-01-22 11:50:47.34772834 +0000 UTC m=+182.091676681" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.349489 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fztfm" podStartSLOduration=5.980882182 podStartE2EDuration="40.349482532s" podCreationTimestamp="2026-01-22 11:50:07 +0000 UTC" firstStartedPulling="2026-01-22 11:50:10.767167382 +0000 UTC m=+145.511115723" lastFinishedPulling="2026-01-22 11:50:45.135767732 +0000 UTC m=+179.879716073" observedRunningTime="2026-01-22 11:50:47.348109179 +0000 UTC m=+182.092057540" watchObservedRunningTime="2026-01-22 11:50:47.349482532 +0000 UTC m=+182.093430873" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.415177 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2q8d8" podStartSLOduration=6.037479351 podStartE2EDuration="40.415152912s" podCreationTimestamp="2026-01-22 
11:50:07 +0000 UTC" firstStartedPulling="2026-01-22 11:50:10.779753286 +0000 UTC m=+145.523701627" lastFinishedPulling="2026-01-22 11:50:45.157426857 +0000 UTC m=+179.901375188" observedRunningTime="2026-01-22 11:50:47.413519463 +0000 UTC m=+182.157467804" watchObservedRunningTime="2026-01-22 11:50:47.415152912 +0000 UTC m=+182.159101243" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.500866 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rp8qf" podStartSLOduration=5.193443166 podStartE2EDuration="38.500836726s" podCreationTimestamp="2026-01-22 11:50:09 +0000 UTC" firstStartedPulling="2026-01-22 11:50:11.82168501 +0000 UTC m=+146.565633351" lastFinishedPulling="2026-01-22 11:50:45.12907857 +0000 UTC m=+179.873026911" observedRunningTime="2026-01-22 11:50:47.467716765 +0000 UTC m=+182.211665106" watchObservedRunningTime="2026-01-22 11:50:47.500836726 +0000 UTC m=+182.244785067" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.501909 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tbgcq" podStartSLOduration=5.115966352 podStartE2EDuration="39.501900182s" podCreationTimestamp="2026-01-22 11:50:08 +0000 UTC" firstStartedPulling="2026-01-22 11:50:10.738000306 +0000 UTC m=+145.481948647" lastFinishedPulling="2026-01-22 11:50:45.123934136 +0000 UTC m=+179.867882477" observedRunningTime="2026-01-22 11:50:47.498451539 +0000 UTC m=+182.242399890" watchObservedRunningTime="2026-01-22 11:50:47.501900182 +0000 UTC m=+182.245848523" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.562626 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.563786 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.563897 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.563989 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="20ed0804-5c2e-4054-a7af-c90d2103aacb" containerName="pruner" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.564043 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ed0804-5c2e-4054-a7af-c90d2103aacb" containerName="pruner" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.564096 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1144df8b-88aa-4dd2-9b2c-ba41340bed9f" containerName="pruner" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.564154 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1144df8b-88aa-4dd2-9b2c-ba41340bed9f" containerName="pruner" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.564326 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1144df8b-88aa-4dd2-9b2c-ba41340bed9f" containerName="pruner" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.564400 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" containerName="kube-multus-additional-cni-plugins" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.564457 5120 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="20ed0804-5c2e-4054-a7af-c90d2103aacb" containerName="pruner" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.568308 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.576398 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.578394 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.579478 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ce43ae-5f5f-4ae6-91bd-98390a12c650" path="/var/lib/kubelet/pods/48ce43ae-5f5f-4ae6-91bd-98390a12c650/volumes" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.580257 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.670872 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/017df5fc-18b4-45b8-af70-249c5434d3dd-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.671378 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/017df5fc-18b4-45b8-af70-249c5434d3dd-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.772476 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/017df5fc-18b4-45b8-af70-249c5434d3dd-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.772588 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/017df5fc-18b4-45b8-af70-249c5434d3dd-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.772668 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/017df5fc-18b4-45b8-af70-249c5434d3dd-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.798197 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/017df5fc-18b4-45b8-af70-249c5434d3dd-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:47 crc kubenswrapper[5120]: I0122 11:50:47.882771 5120 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.272794 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 22 11:50:48 crc kubenswrapper[5120]: W0122 11:50:48.283553 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod017df5fc_18b4_45b8_af70_249c5434d3dd.slice/crio-079a446d9ff3699aaec584981e4b14121451442a9df70678d25ce59d3766ab54 WatchSource:0}: Error finding container 079a446d9ff3699aaec584981e4b14121451442a9df70678d25ce59d3766ab54: Status 404 returned error can't find the container with id 079a446d9ff3699aaec584981e4b14121451442a9df70678d25ce59d3766ab54 Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.314257 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbm7w" event={"ID":"fda19cab-4c2e-47a2-993c-ce6f3795e561","Type":"ContainerStarted","Data":"f06bad76aa0a0af81a23a0c7892445f4237f1858924bdaae4e0635ae65173fe2"} Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.319507 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t67f7" event={"ID":"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b","Type":"ContainerStarted","Data":"0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd"} Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.323810 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"017df5fc-18b4-45b8-af70-249c5434d3dd","Type":"ContainerStarted","Data":"079a446d9ff3699aaec584981e4b14121451442a9df70678d25ce59d3766ab54"} Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.368705 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t67f7" podStartSLOduration=6.034302011 podStartE2EDuration="38.368679945s" podCreationTimestamp="2026-01-22 11:50:10 +0000 UTC" firstStartedPulling="2026-01-22 11:50:12.885481693 +0000 UTC m=+147.629430034" lastFinishedPulling="2026-01-22 11:50:45.219859637 +0000 UTC m=+179.963807968" observedRunningTime="2026-01-22 11:50:48.368309137 +0000 UTC m=+183.112257488" watchObservedRunningTime="2026-01-22 11:50:48.368679945 +0000 UTC m=+183.112628286" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.368909 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mbm7w" podStartSLOduration=5.101359673 podStartE2EDuration="37.368905161s" podCreationTimestamp="2026-01-22 11:50:11 +0000 UTC" firstStartedPulling="2026-01-22 11:50:12.871510274 +0000 UTC m=+147.615458615" lastFinishedPulling="2026-01-22 11:50:45.139055762 +0000 UTC m=+179.883004103" observedRunningTime="2026-01-22 11:50:48.339433177 +0000 UTC m=+183.083381538" watchObservedRunningTime="2026-01-22 11:50:48.368905161 +0000 UTC m=+183.112853492" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.577878 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.577945 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.596357 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.596432 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.602356 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.602449 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.702777 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.714790 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.737119 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:48 crc kubenswrapper[5120]: I0122 11:50:48.737162 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:49 crc kubenswrapper[5120]: I0122 11:50:49.334993 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"017df5fc-18b4-45b8-af70-249c5434d3dd","Type":"ContainerStarted","Data":"50ddb36530baaab6da9a203e91790393b50ad35da33e0c7be9ca4f1650c4872d"} Jan 22 11:50:49 crc kubenswrapper[5120]: I0122 11:50:49.357051 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=2.357015451 podStartE2EDuration="2.357015451s" podCreationTimestamp="2026-01-22 11:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:49.352115273 +0000 UTC m=+184.096063604" watchObservedRunningTime="2026-01-22 11:50:49.357015451 +0000 UTC m=+184.100963792" Jan 22 11:50:49 crc kubenswrapper[5120]: I0122 11:50:49.714187 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2q8d8" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="registry-server" probeResult="failure" output=< Jan 22 11:50:49 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Jan 22 11:50:49 crc kubenswrapper[5120]: > Jan 22 11:50:49 crc kubenswrapper[5120]: I0122 11:50:49.779747 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tbgcq" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="registry-server" probeResult="failure" output=< Jan 22 11:50:49 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Jan 22 11:50:49 crc kubenswrapper[5120]: > Jan 22 11:50:50 crc kubenswrapper[5120]: I0122 11:50:50.041027 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:50 crc kubenswrapper[5120]: I0122 11:50:50.041099 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:50 crc 
kubenswrapper[5120]: I0122 11:50:50.101157 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:50 crc kubenswrapper[5120]: I0122 11:50:50.341627 5120 generic.go:358] "Generic (PLEG): container finished" podID="017df5fc-18b4-45b8-af70-249c5434d3dd" containerID="50ddb36530baaab6da9a203e91790393b50ad35da33e0c7be9ca4f1650c4872d" exitCode=0 Jan 22 11:50:50 crc kubenswrapper[5120]: I0122 11:50:50.341722 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"017df5fc-18b4-45b8-af70-249c5434d3dd","Type":"ContainerDied","Data":"50ddb36530baaab6da9a203e91790393b50ad35da33e0c7be9ca4f1650c4872d"} Jan 22 11:50:50 crc kubenswrapper[5120]: I0122 11:50:50.420731 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:50 crc kubenswrapper[5120]: I0122 11:50:50.420786 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:50 crc kubenswrapper[5120]: I0122 11:50:50.490894 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.394589 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.395050 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.551913 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.552387 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.642067 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.731710 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/017df5fc-18b4-45b8-af70-249c5434d3dd-kube-api-access\") pod \"017df5fc-18b4-45b8-af70-249c5434d3dd\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.731773 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/017df5fc-18b4-45b8-af70-249c5434d3dd-kubelet-dir\") pod \"017df5fc-18b4-45b8-af70-249c5434d3dd\" (UID: \"017df5fc-18b4-45b8-af70-249c5434d3dd\") " Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.731976 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/017df5fc-18b4-45b8-af70-249c5434d3dd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "017df5fc-18b4-45b8-af70-249c5434d3dd" (UID: "017df5fc-18b4-45b8-af70-249c5434d3dd"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.732199 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/017df5fc-18b4-45b8-af70-249c5434d3dd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.750704 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/017df5fc-18b4-45b8-af70-249c5434d3dd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "017df5fc-18b4-45b8-af70-249c5434d3dd" (UID: "017df5fc-18b4-45b8-af70-249c5434d3dd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.829396 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.829440 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:50:51 crc kubenswrapper[5120]: I0122 11:50:51.833371 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/017df5fc-18b4-45b8-af70-249c5434d3dd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:52 crc kubenswrapper[5120]: I0122 11:50:52.355832 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 22 11:50:52 crc kubenswrapper[5120]: I0122 11:50:52.355875 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"017df5fc-18b4-45b8-af70-249c5434d3dd","Type":"ContainerDied","Data":"079a446d9ff3699aaec584981e4b14121451442a9df70678d25ce59d3766ab54"} Jan 22 11:50:52 crc kubenswrapper[5120]: I0122 11:50:52.356167 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="079a446d9ff3699aaec584981e4b14121451442a9df70678d25ce59d3766ab54" Jan 22 11:50:52 crc kubenswrapper[5120]: I0122 11:50:52.592895 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t67f7" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="registry-server" probeResult="failure" output=< Jan 22 11:50:52 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Jan 22 11:50:52 crc kubenswrapper[5120]: > Jan 22 11:50:52 crc kubenswrapper[5120]: I0122 11:50:52.645737 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5nvn"] Jan 22 11:50:52 crc kubenswrapper[5120]: I0122 11:50:52.869847 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbm7w" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="registry-server" probeResult="failure" output=< Jan 22 11:50:52 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Jan 22 11:50:52 crc kubenswrapper[5120]: > Jan 22 11:50:53 crc kubenswrapper[5120]: I0122 11:50:53.360554 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z5nvn" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="registry-server" containerID="cri-o://e94ae6f6e61790076393376b71522698c65d8d872bdfe197441f1ede23e779f0" gracePeriod=2 Jan 22 11:50:54 crc 
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.368349 5120 generic.go:358] "Generic (PLEG): container finished" podID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerID="e94ae6f6e61790076393376b71522698c65d8d872bdfe197441f1ede23e779f0" exitCode=0
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.368486 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5nvn" event={"ID":"5a52d1c0-c55c-47b4-936e-a783304a0e89","Type":"ContainerDied","Data":"e94ae6f6e61790076393376b71522698c65d8d872bdfe197441f1ede23e779f0"}
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.562520 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.563597 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="017df5fc-18b4-45b8-af70-249c5434d3dd" containerName="pruner"
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.563627 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="017df5fc-18b4-45b8-af70-249c5434d3dd" containerName="pruner"
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.564043 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="017df5fc-18b4-45b8-af70-249c5434d3dd" containerName="pruner"
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.576595 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.576771 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.579134 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.579903 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.674799 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-var-lock\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.674902 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kubelet-dir\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.675007 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kube-api-access\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc"
\"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.776784 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kube-api-access\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.776815 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-var-lock\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.776904 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-var-lock\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.776943 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kubelet-dir\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.796234 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kube-api-access\") pod \"installer-12-crc\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.824674 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.911575 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.979795 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ctgj\" (UniqueName: \"kubernetes.io/projected/5a52d1c0-c55c-47b4-936e-a783304a0e89-kube-api-access-2ctgj\") pod \"5a52d1c0-c55c-47b4-936e-a783304a0e89\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.980048 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-utilities\") pod \"5a52d1c0-c55c-47b4-936e-a783304a0e89\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.980088 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-catalog-content\") pod \"5a52d1c0-c55c-47b4-936e-a783304a0e89\" (UID: \"5a52d1c0-c55c-47b4-936e-a783304a0e89\") " Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.981225 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-utilities" (OuterVolumeSpecName: "utilities") pod "5a52d1c0-c55c-47b4-936e-a783304a0e89" (UID: "5a52d1c0-c55c-47b4-936e-a783304a0e89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.985456 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a52d1c0-c55c-47b4-936e-a783304a0e89-kube-api-access-2ctgj" (OuterVolumeSpecName: "kube-api-access-2ctgj") pod "5a52d1c0-c55c-47b4-936e-a783304a0e89" (UID: "5a52d1c0-c55c-47b4-936e-a783304a0e89"). InnerVolumeSpecName "kube-api-access-2ctgj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:50:54 crc kubenswrapper[5120]: I0122 11:50:54.995930 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a52d1c0-c55c-47b4-936e-a783304a0e89" (UID: "5a52d1c0-c55c-47b4-936e-a783304a0e89"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.081274 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.081319 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52d1c0-c55c-47b4-936e-a783304a0e89-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.081332 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2ctgj\" (UniqueName: \"kubernetes.io/projected/5a52d1c0-c55c-47b4-936e-a783304a0e89-kube-api-access-2ctgj\") on node \"crc\" DevicePath \"\"" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.128880 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.375078 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f30ae543-bf57-4bbc-9c40-25ceab4603c6","Type":"ContainerStarted","Data":"c47f56a7ba94352bdbc302b5089a5a57c1a67692d87e9c910901f243c667c377"} Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.378439 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5nvn" event={"ID":"5a52d1c0-c55c-47b4-936e-a783304a0e89","Type":"ContainerDied","Data":"78c9a69e1fa99c2e87a7582c593ca2b6cefde510daa7b05fc0d9db0261917a2a"} Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.378529 5120 scope.go:117] "RemoveContainer" containerID="e94ae6f6e61790076393376b71522698c65d8d872bdfe197441f1ede23e779f0" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.378635 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5nvn" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.398296 5120 scope.go:117] "RemoveContainer" containerID="04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.412166 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5nvn"] Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.417507 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5nvn"] Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.437979 5120 scope.go:117] "RemoveContainer" containerID="f7fd7cbfe79a1adebb0cfbd3dc66028444cc6622806f14ca6c6694184f1c03cf" Jan 22 11:50:55 crc kubenswrapper[5120]: I0122 11:50:55.578558 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" path="/var/lib/kubelet/pods/5a52d1c0-c55c-47b4-936e-a783304a0e89/volumes" Jan 22 11:50:56 crc kubenswrapper[5120]: I0122 11:50:56.386378 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f30ae543-bf57-4bbc-9c40-25ceab4603c6","Type":"ContainerStarted","Data":"d4c4f24e5c9a48752758f6dcf933d24a1e6486cd93edc80fe0fcd4be8d8e0255"} Jan 22 11:50:56 crc kubenswrapper[5120]: I0122 11:50:56.418463 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=2.418432547 podStartE2EDuration="2.418432547s" podCreationTimestamp="2026-01-22 11:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:50:56.412737349 +0000 UTC m=+191.156685690" watchObservedRunningTime="2026-01-22 11:50:56.418432547 +0000 UTC m=+191.162380888" Jan 22 11:50:56 crc kubenswrapper[5120]: E0122 11:50:56.903341 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a52d1c0_c55c_47b4_936e_a783304a0e89.slice/crio-04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:50:58 crc kubenswrapper[5120]: I0122 11:50:58.633005 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:58 crc kubenswrapper[5120]: I0122 11:50:58.683670 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:50:58 crc kubenswrapper[5120]: I0122 11:50:58.837384 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:58 crc kubenswrapper[5120]: I0122 11:50:58.877319 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:50:59 crc kubenswrapper[5120]: I0122 11:50:59.842917 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tbgcq"] Jan 22 11:51:00 crc kubenswrapper[5120]: I0122 11:51:00.381402 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:51:00 crc kubenswrapper[5120]: I0122 11:51:00.382513 5120 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:51:00 crc kubenswrapper[5120]: I0122 11:51:00.416484 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tbgcq" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="registry-server" containerID="cri-o://f4f7bc0583697b2f695f6f1c26c7ce5ff64e708099c05083dc3b1510e1605486" gracePeriod=2 Jan 22 11:51:01 crc kubenswrapper[5120]: I0122 11:51:01.595523 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:51:01 crc kubenswrapper[5120]: I0122 11:51:01.637042 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:51:01 crc kubenswrapper[5120]: I0122 11:51:01.884450 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:51:01 crc kubenswrapper[5120]: I0122 11:51:01.938207 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:51:02 crc kubenswrapper[5120]: I0122 11:51:02.649512 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p26dp"] Jan 22 11:51:02 crc kubenswrapper[5120]: I0122 11:51:02.650690 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p26dp" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="registry-server" containerID="cri-o://a3a3097fd4339ce32794c09b0be56788819c79a81ede80e9fdec2115b13052f2" gracePeriod=2 Jan 22 11:51:04 crc kubenswrapper[5120]: I0122 11:51:04.043640 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbm7w"] Jan 22 11:51:04 crc kubenswrapper[5120]: I0122 11:51:04.043948 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mbm7w" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="registry-server" containerID="cri-o://f06bad76aa0a0af81a23a0c7892445f4237f1858924bdaae4e0635ae65173fe2" gracePeriod=2 Jan 22 11:51:04 crc kubenswrapper[5120]: I0122 11:51:04.438998 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tbgcq_3e95505c-a7eb-4d9f-be2f-e7129e3643b8/registry-server/0.log" Jan 22 11:51:04 crc kubenswrapper[5120]: I0122 11:51:04.440399 5120 generic.go:358] "Generic (PLEG): container finished" podID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerID="f4f7bc0583697b2f695f6f1c26c7ce5ff64e708099c05083dc3b1510e1605486" exitCode=137 Jan 22 11:51:04 crc kubenswrapper[5120]: I0122 11:51:04.440473 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbgcq" event={"ID":"3e95505c-a7eb-4d9f-be2f-e7129e3643b8","Type":"ContainerDied","Data":"f4f7bc0583697b2f695f6f1c26c7ce5ff64e708099c05083dc3b1510e1605486"} Jan 22 11:51:04 crc kubenswrapper[5120]: I0122 11:51:04.442877 5120 generic.go:358] "Generic (PLEG): container finished" podID="089fc2c1-8274-4532-a14a-21194d01a310" containerID="a3a3097fd4339ce32794c09b0be56788819c79a81ede80e9fdec2115b13052f2" exitCode=0 Jan 22 11:51:04 crc kubenswrapper[5120]: I0122 11:51:04.442913 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p26dp" 
event={"ID":"089fc2c1-8274-4532-a14a-21194d01a310","Type":"ContainerDied","Data":"a3a3097fd4339ce32794c09b0be56788819c79a81ede80e9fdec2115b13052f2"} Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.001820 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.046450 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-utilities\") pod \"089fc2c1-8274-4532-a14a-21194d01a310\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.046573 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-catalog-content\") pod \"089fc2c1-8274-4532-a14a-21194d01a310\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.046693 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp5qp\" (UniqueName: \"kubernetes.io/projected/089fc2c1-8274-4532-a14a-21194d01a310-kube-api-access-gp5qp\") pod \"089fc2c1-8274-4532-a14a-21194d01a310\" (UID: \"089fc2c1-8274-4532-a14a-21194d01a310\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.050196 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-utilities" (OuterVolumeSpecName: "utilities") pod "089fc2c1-8274-4532-a14a-21194d01a310" (UID: "089fc2c1-8274-4532-a14a-21194d01a310"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.063110 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/089fc2c1-8274-4532-a14a-21194d01a310-kube-api-access-gp5qp" (OuterVolumeSpecName: "kube-api-access-gp5qp") pod "089fc2c1-8274-4532-a14a-21194d01a310" (UID: "089fc2c1-8274-4532-a14a-21194d01a310"). InnerVolumeSpecName "kube-api-access-gp5qp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.087483 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "089fc2c1-8274-4532-a14a-21194d01a310" (UID: "089fc2c1-8274-4532-a14a-21194d01a310"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.119772 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tbgcq_3e95505c-a7eb-4d9f-be2f-e7129e3643b8/registry-server/0.log" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.120585 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.149904 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-utilities\") pod \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.150005 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqkzj\" (UniqueName: \"kubernetes.io/projected/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-kube-api-access-zqkzj\") pod \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.150176 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-catalog-content\") pod \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\" (UID: \"3e95505c-a7eb-4d9f-be2f-e7129e3643b8\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.150477 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gp5qp\" (UniqueName: \"kubernetes.io/projected/089fc2c1-8274-4532-a14a-21194d01a310-kube-api-access-gp5qp\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.150503 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.150514 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/089fc2c1-8274-4532-a14a-21194d01a310-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.151171 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-utilities" (OuterVolumeSpecName: "utilities") pod "3e95505c-a7eb-4d9f-be2f-e7129e3643b8" (UID: "3e95505c-a7eb-4d9f-be2f-e7129e3643b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.157234 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-kube-api-access-zqkzj" (OuterVolumeSpecName: "kube-api-access-zqkzj") pod "3e95505c-a7eb-4d9f-be2f-e7129e3643b8" (UID: "3e95505c-a7eb-4d9f-be2f-e7129e3643b8"). InnerVolumeSpecName "kube-api-access-zqkzj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.210349 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e95505c-a7eb-4d9f-be2f-e7129e3643b8" (UID: "3e95505c-a7eb-4d9f-be2f-e7129e3643b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.253020 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.253523 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.253617 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zqkzj\" (UniqueName: \"kubernetes.io/projected/3e95505c-a7eb-4d9f-be2f-e7129e3643b8-kube-api-access-zqkzj\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.451589 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tbgcq_3e95505c-a7eb-4d9f-be2f-e7129e3643b8/registry-server/0.log" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.455695 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tbgcq" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.456058 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tbgcq" event={"ID":"3e95505c-a7eb-4d9f-be2f-e7129e3643b8","Type":"ContainerDied","Data":"d7e449df56d4aa55bd535980c4c65253f3325cde543e24f2634b3227e292a791"} Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.456340 5120 scope.go:117] "RemoveContainer" containerID="f4f7bc0583697b2f695f6f1c26c7ce5ff64e708099c05083dc3b1510e1605486" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.461900 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p26dp" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.461901 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p26dp" event={"ID":"089fc2c1-8274-4532-a14a-21194d01a310","Type":"ContainerDied","Data":"408feb4598d3b1d5ae322e87417dab316fa1b75c632f7ace01cbd6d89c0b3941"} Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.472223 5120 generic.go:358] "Generic (PLEG): container finished" podID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerID="f06bad76aa0a0af81a23a0c7892445f4237f1858924bdaae4e0635ae65173fe2" exitCode=0 Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.472396 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbm7w" event={"ID":"fda19cab-4c2e-47a2-993c-ce6f3795e561","Type":"ContainerDied","Data":"f06bad76aa0a0af81a23a0c7892445f4237f1858924bdaae4e0635ae65173fe2"} Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.475008 5120 scope.go:117] "RemoveContainer" containerID="c75872699b265f647f93429326d1a8652dfa1cbe0ac2767c1c24f307072383a1" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.499643 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.509276 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tbgcq"] Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.511681 5120 scope.go:117] "RemoveContainer" containerID="7de27767f0a768c4d8be8f2a9463a108ad7455645c4ac170a6ce680c9ed560d4" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.512625 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tbgcq"] Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.549288 5120 scope.go:117] "RemoveContainer" containerID="a3a3097fd4339ce32794c09b0be56788819c79a81ede80e9fdec2115b13052f2" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.553579 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p26dp"] Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.556014 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p26dp"] Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.558662 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-utilities\") pod \"fda19cab-4c2e-47a2-993c-ce6f3795e561\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.558779 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-catalog-content\") pod \"fda19cab-4c2e-47a2-993c-ce6f3795e561\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.558882 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmxxj\" (UniqueName: \"kubernetes.io/projected/fda19cab-4c2e-47a2-993c-ce6f3795e561-kube-api-access-lmxxj\") pod \"fda19cab-4c2e-47a2-993c-ce6f3795e561\" (UID: \"fda19cab-4c2e-47a2-993c-ce6f3795e561\") " Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.559915 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-utilities" (OuterVolumeSpecName: "utilities") pod "fda19cab-4c2e-47a2-993c-ce6f3795e561" (UID: "fda19cab-4c2e-47a2-993c-ce6f3795e561"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.564977 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda19cab-4c2e-47a2-993c-ce6f3795e561-kube-api-access-lmxxj" (OuterVolumeSpecName: "kube-api-access-lmxxj") pod "fda19cab-4c2e-47a2-993c-ce6f3795e561" (UID: "fda19cab-4c2e-47a2-993c-ce6f3795e561"). InnerVolumeSpecName "kube-api-access-lmxxj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.566062 5120 scope.go:117] "RemoveContainer" containerID="9bc291a555447cad49a14283506bdb0035ead9ce2860615680f3af52e9dceda9" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.582128 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="089fc2c1-8274-4532-a14a-21194d01a310" path="/var/lib/kubelet/pods/089fc2c1-8274-4532-a14a-21194d01a310/volumes" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.583135 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" path="/var/lib/kubelet/pods/3e95505c-a7eb-4d9f-be2f-e7129e3643b8/volumes" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.583771 5120 scope.go:117] "RemoveContainer" containerID="8c8add6d6346bffb920d193189f09708f0ce72391c85a3b8f9fe5d165b2e4b5d" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.662198 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lmxxj\" (UniqueName: \"kubernetes.io/projected/fda19cab-4c2e-47a2-993c-ce6f3795e561-kube-api-access-lmxxj\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.662224 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.665340 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fda19cab-4c2e-47a2-993c-ce6f3795e561" (UID: "fda19cab-4c2e-47a2-993c-ce6f3795e561"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:05 crc kubenswrapper[5120]: I0122 11:51:05.764272 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda19cab-4c2e-47a2-993c-ce6f3795e561-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:06 crc kubenswrapper[5120]: I0122 11:51:06.486143 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbm7w" event={"ID":"fda19cab-4c2e-47a2-993c-ce6f3795e561","Type":"ContainerDied","Data":"f088b06a5bed8fcb72cf992ec4dfa09770bed17e70fa6aa78bd0452016efb6e5"} Jan 22 11:51:06 crc kubenswrapper[5120]: I0122 11:51:06.486617 5120 scope.go:117] "RemoveContainer" containerID="f06bad76aa0a0af81a23a0c7892445f4237f1858924bdaae4e0635ae65173fe2" Jan 22 11:51:06 crc kubenswrapper[5120]: I0122 11:51:06.486202 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbm7w" Jan 22 11:51:06 crc kubenswrapper[5120]: I0122 11:51:06.522928 5120 scope.go:117] "RemoveContainer" containerID="0f93aadd0112a21eacebe8630496cabe8f22f4bbdfd32043b156cba561df7b59" Jan 22 11:51:06 crc kubenswrapper[5120]: I0122 11:51:06.527670 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbm7w"] Jan 22 11:51:06 crc kubenswrapper[5120]: I0122 11:51:06.529797 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mbm7w"] Jan 22 11:51:06 crc kubenswrapper[5120]: I0122 11:51:06.545377 5120 scope.go:117] "RemoveContainer" containerID="225b2e979aa1449106827d89e2af943939a02a67507731955126d01302822780" Jan 22 11:51:07 crc kubenswrapper[5120]: E0122 11:51:07.029030 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a52d1c0_c55c_47b4_936e_a783304a0e89.slice/crio-04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:51:07 crc kubenswrapper[5120]: I0122 11:51:07.595073 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" path="/var/lib/kubelet/pods/fda19cab-4c2e-47a2-993c-ce6f3795e561/volumes" Jan 22 11:51:17 crc kubenswrapper[5120]: E0122 11:51:17.141015 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a52d1c0_c55c_47b4_936e_a783304a0e89.slice/crio-04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:51:18 crc kubenswrapper[5120]: I0122 11:51:18.842027 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-25dsq"] Jan 22 11:51:22 crc kubenswrapper[5120]: I0122 11:51:22.183109 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35912: no serving certificate available for the kubelet" Jan 22 11:51:27 crc kubenswrapper[5120]: E0122 11:51:27.278725 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a52d1c0_c55c_47b4_936e_a783304a0e89.slice/crio-04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:51:31 crc kubenswrapper[5120]: I0122 11:51:31.973009 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:51:31 crc kubenswrapper[5120]: I0122 11:51:31.973638 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.527931 5120 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] 
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529266 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529354 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529419 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529475 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529543 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529599 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529660 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529717 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529779 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529836 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.529896 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530021 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530086 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530143 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530207 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530264 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530320 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530379 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530435 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530491 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="extract-utilities"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530541 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530594 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="extract-content"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530679 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530738 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530901 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="089fc2c1-8274-4532-a14a-21194d01a310" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.530983 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="5a52d1c0-c55c-47b4-936e-a783304a0e89" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.531050 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3e95505c-a7eb-4d9f-be2f-e7129e3643b8" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.531118 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="fda19cab-4c2e-47a2-993c-ce6f3795e561" containerName="registry-server"
Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.687317 5120 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.688884 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c" gracePeriod=15 Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.689106 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc" gracePeriod=15 Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.689149 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b" gracePeriod=15 Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.689195 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f" gracePeriod=15 Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.689215 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1" gracePeriod=15 Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.706188 5120 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707211 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707226 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707237 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707244 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707257 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707263 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707277 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707499 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707510 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707516 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707525 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707531 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707540 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707547 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707563 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707568 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707937 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707961 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707971 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707985 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707992 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.707998 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.708004 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.708013 5120 
memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.708126 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.708137 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.708244 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.708358 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.708365 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.773561 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: E0122 11:51:33.774551 5120 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782336 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782380 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782409 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782456 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782529 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782695 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782750 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782792 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782819 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.782882 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.883806 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.883854 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.883877 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.883896 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884015 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884078 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884016 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884098 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884246 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884302 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884342 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884386 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884471 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod 
\"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884485 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884544 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884572 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884626 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884664 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.884875 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:33 crc kubenswrapper[5120]: I0122 11:51:33.885017 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.075336 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:34 crc kubenswrapper[5120]: E0122 11:51:34.102251 5120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d0b565838ce2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:51:34.101683756 +0000 UTC m=+228.845632137,LastTimestamp:2026-01-22 11:51:34.101683756 +0000 UTC m=+228.845632137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:51:34 crc kubenswrapper[5120]: E0122 11:51:34.625519 5120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d0b565838ce2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:51:34.101683756 +0000 UTC m=+228.845632137,LastTimestamp:2026-01-22 11:51:34.101683756 +0000 UTC m=+228.845632137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.685900 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"a8463e343cc5ae2c432dc371c37cafeb5cfd870e6bf3b62821dbcd1658194ee4"} Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.685994 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"93b89363df3ac6ad673e5ae755b2fab3bc9dad346d982ed1e9e6e0b8559055f7"} Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.686401 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:34 crc kubenswrapper[5120]: E0122 11:51:34.687117 5120 kubelet.go:3342] "Failed creating a mirror pod" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.688540 5120 generic.go:358] "Generic (PLEG): container finished" podID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" containerID="d4c4f24e5c9a48752758f6dcf933d24a1e6486cd93edc80fe0fcd4be8d8e0255" exitCode=0 Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.688653 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f30ae543-bf57-4bbc-9c40-25ceab4603c6","Type":"ContainerDied","Data":"d4c4f24e5c9a48752758f6dcf933d24a1e6486cd93edc80fe0fcd4be8d8e0255"} Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.689695 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.690779 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.692286 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.692909 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b" exitCode=0 Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.692936 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc" exitCode=0 Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.692944 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f" exitCode=0 Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.692974 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1" exitCode=2 Jan 22 11:51:34 crc kubenswrapper[5120]: I0122 11:51:34.693023 5120 scope.go:117] "RemoveContainer" containerID="99b634350c36056ac94a43bb1050fb0a41c21441966a10fdfe3aeae30cfd0c2f" Jan 22 11:51:35 crc kubenswrapper[5120]: I0122 11:51:35.574949 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:35 crc kubenswrapper[5120]: I0122 11:51:35.709488 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.038804 5120 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.040072 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.118107 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kubelet-dir\") pod \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.118315 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f30ae543-bf57-4bbc-9c40-25ceab4603c6" (UID: "f30ae543-bf57-4bbc-9c40-25ceab4603c6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.118464 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-var-lock\") pod \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.118631 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kube-api-access\") pod \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\" (UID: \"f30ae543-bf57-4bbc-9c40-25ceab4603c6\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.118543 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-var-lock" (OuterVolumeSpecName: "var-lock") pod "f30ae543-bf57-4bbc-9c40-25ceab4603c6" (UID: "f30ae543-bf57-4bbc-9c40-25ceab4603c6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.118964 5120 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.119050 5120 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f30ae543-bf57-4bbc-9c40-25ceab4603c6-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.126582 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f30ae543-bf57-4bbc-9c40-25ceab4603c6" (UID: "f30ae543-bf57-4bbc-9c40-25ceab4603c6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.220047 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f30ae543-bf57-4bbc-9c40-25ceab4603c6-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.558119 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.559087 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.559771 5120 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.560225 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623097 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623179 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623213 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623270 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623351 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623400 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623397 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.623452 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.624238 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.624333 5120 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.624356 5120 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.624370 5120 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.629725 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.722982 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.723657 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c" exitCode=0 Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.723947 5120 scope.go:117] "RemoveContainer" containerID="79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.724102 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.726081 5120 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.726111 5120 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.732182 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"f30ae543-bf57-4bbc-9c40-25ceab4603c6","Type":"ContainerDied","Data":"c47f56a7ba94352bdbc302b5089a5a57c1a67692d87e9c910901f243c667c377"} Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.732241 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c47f56a7ba94352bdbc302b5089a5a57c1a67692d87e9c910901f243c667c377" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.732413 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.750413 5120 scope.go:117] "RemoveContainer" containerID="fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.753168 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.753575 5120 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.758540 5120 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.763649 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.768257 5120 scope.go:117] "RemoveContainer" containerID="3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.786950 5120 scope.go:117] "RemoveContainer" containerID="64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.801901 5120 scope.go:117] "RemoveContainer" containerID="911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.825588 5120 scope.go:117] "RemoveContainer" containerID="8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.880987 5120 scope.go:117] "RemoveContainer" containerID="79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b" Jan 22 11:51:36 crc kubenswrapper[5120]: E0122 11:51:36.881506 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b\": container with ID starting with 79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b not found: ID does not exist" containerID="79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.881555 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b"} err="failed to get container status \"79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b\": rpc error: code = NotFound desc = could not find container 
\"79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b\": container with ID starting with 79545e3bdfa141cbd330789b3726a926a352dee430ef750fa2a4adffc6f4f17b not found: ID does not exist" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.881612 5120 scope.go:117] "RemoveContainer" containerID="fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc" Jan 22 11:51:36 crc kubenswrapper[5120]: E0122 11:51:36.882026 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\": container with ID starting with fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc not found: ID does not exist" containerID="fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.882061 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc"} err="failed to get container status \"fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\": rpc error: code = NotFound desc = could not find container \"fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc\": container with ID starting with fbc73082c8fc6e4c53f063e1d1446fff9c541a208f3ab11d7c687b5b06958ebc not found: ID does not exist" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.882074 5120 scope.go:117] "RemoveContainer" containerID="3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f" Jan 22 11:51:36 crc kubenswrapper[5120]: E0122 11:51:36.882394 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\": container with ID starting with 3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f not found: ID does not exist" containerID="3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.882421 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f"} err="failed to get container status \"3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\": rpc error: code = NotFound desc = could not find container \"3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f\": container with ID starting with 3ebb490a3adef5a0bb92ba36215125157bd696a19543743e029f6ef8d7ddaf9f not found: ID does not exist" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.882437 5120 scope.go:117] "RemoveContainer" containerID="64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1" Jan 22 11:51:36 crc kubenswrapper[5120]: E0122 11:51:36.882718 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\": container with ID starting with 64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1 not found: ID does not exist" containerID="64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.882744 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1"} 
err="failed to get container status \"64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\": rpc error: code = NotFound desc = could not find container \"64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1\": container with ID starting with 64d17043c5bd9fe7e126416520a376da7a3779ed00b20eb4d36e1651e0e4deb1 not found: ID does not exist" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.882759 5120 scope.go:117] "RemoveContainer" containerID="911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c" Jan 22 11:51:36 crc kubenswrapper[5120]: E0122 11:51:36.883133 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\": container with ID starting with 911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c not found: ID does not exist" containerID="911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.883244 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c"} err="failed to get container status \"911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\": rpc error: code = NotFound desc = could not find container \"911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c\": container with ID starting with 911cf90f454467de717e1f9bb20b825a5be262103e70d8507cf0069f6044f56c not found: ID does not exist" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.883287 5120 scope.go:117] "RemoveContainer" containerID="8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41" Jan 22 11:51:36 crc kubenswrapper[5120]: E0122 11:51:36.884028 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\": container with ID starting with 8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41 not found: ID does not exist" containerID="8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41" Jan 22 11:51:36 crc kubenswrapper[5120]: I0122 11:51:36.884062 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41"} err="failed to get container status \"8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\": rpc error: code = NotFound desc = could not find container \"8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41\": container with ID starting with 8940990b4eeab47177be3a76a9fc4894d28308e94e4c45050915ec740b778a41 not found: ID does not exist" Jan 22 11:51:37 crc kubenswrapper[5120]: E0122 11:51:37.411998 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a52d1c0_c55c_47b4_936e_a783304a0e89.slice/crio-04e86588d8fba653a7e46769775e0363411492a2faa05c1b5793a39fc530062e.scope\": RecentStats: unable to find data in memory cache]" Jan 22 11:51:37 crc kubenswrapper[5120]: I0122 11:51:37.585124 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 22 11:51:40 crc kubenswrapper[5120]: E0122 11:51:40.854939 
5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:40 crc kubenswrapper[5120]: E0122 11:51:40.856067 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:40 crc kubenswrapper[5120]: E0122 11:51:40.856422 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:40 crc kubenswrapper[5120]: E0122 11:51:40.856738 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:40 crc kubenswrapper[5120]: E0122 11:51:40.857185 5120 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:40 crc kubenswrapper[5120]: I0122 11:51:40.857231 5120 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 11:51:40 crc kubenswrapper[5120]: E0122 11:51:40.857642 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="200ms" Jan 22 11:51:41 crc kubenswrapper[5120]: E0122 11:51:41.059427 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="400ms" Jan 22 11:51:41 crc kubenswrapper[5120]: E0122 11:51:41.460812 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="800ms" Jan 22 11:51:42 crc kubenswrapper[5120]: E0122 11:51:42.262098 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="1.6s" Jan 22 11:51:43 crc kubenswrapper[5120]: E0122 11:51:43.863085 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="3.2s" Jan 22 11:51:43 crc kubenswrapper[5120]: I0122 11:51:43.896569 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" containerName="oauth-openshift" 
containerID="cri-o://1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69" gracePeriod=15 Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.320390 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.321298 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.321810 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450162 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-router-certs\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450271 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-idp-0-file-data\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450356 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-cliconfig\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450457 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-service-ca\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450501 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-dir\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450581 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-ocp-branding-template\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450647 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-policies\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450781 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-error\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450763 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.450817 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-session\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.451745 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.451758 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.451142 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-login\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.452318 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-trusted-ca-bundle\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.452314 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.452382 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-serving-cert\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.452438 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgrjt\" (UniqueName: \"kubernetes.io/projected/bebd6777-9b90-4b62-a3a9-360290cb39a9-kube-api-access-dgrjt\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.452687 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-provider-selection\") pod \"bebd6777-9b90-4b62-a3a9-360290cb39a9\" (UID: \"bebd6777-9b90-4b62-a3a9-360290cb39a9\") " Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.452850 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.453259 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.453287 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.453784 5120 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.453807 5120 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.453827 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.461152 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). 
InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.462428 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.462640 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.462667 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebd6777-9b90-4b62-a3a9-360290cb39a9-kube-api-access-dgrjt" (OuterVolumeSpecName: "kube-api-access-dgrjt") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "kube-api-access-dgrjt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.464802 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.465169 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.465778 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.466006 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.472612 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "bebd6777-9b90-4b62-a3a9-360290cb39a9" (UID: "bebd6777-9b90-4b62-a3a9-360290cb39a9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.555776 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556253 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556345 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556415 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556485 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dgrjt\" (UniqueName: \"kubernetes.io/projected/bebd6777-9b90-4b62-a3a9-360290cb39a9-kube-api-access-dgrjt\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556589 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556680 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556750 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.556810 5120 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bebd6777-9b90-4b62-a3a9-360290cb39a9-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 11:51:44 crc kubenswrapper[5120]: E0122 11:51:44.627642 5120 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.36:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d0b565838ce2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 11:51:34.101683756 +0000 UTC m=+228.845632137,LastTimestamp:2026-01-22 11:51:34.101683756 +0000 UTC m=+228.845632137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.796090 5120 generic.go:358] "Generic (PLEG): container finished" podID="bebd6777-9b90-4b62-a3a9-360290cb39a9" containerID="1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69" exitCode=0 Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.796226 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.796271 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" event={"ID":"bebd6777-9b90-4b62-a3a9-360290cb39a9","Type":"ContainerDied","Data":"1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69"} Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.796346 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" event={"ID":"bebd6777-9b90-4b62-a3a9-360290cb39a9","Type":"ContainerDied","Data":"743767c75fc8dbe2e21f07b80773fcf606c65fb144c9e4f33a6d600d11d2e9c8"} Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.796401 5120 scope.go:117] "RemoveContainer" containerID="1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.797513 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.798490 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.821084 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.821847 5120 status_manager.go:895] "Failed to get 
status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.837039 5120 scope.go:117] "RemoveContainer" containerID="1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69" Jan 22 11:51:44 crc kubenswrapper[5120]: E0122 11:51:44.837759 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69\": container with ID starting with 1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69 not found: ID does not exist" containerID="1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69" Jan 22 11:51:44 crc kubenswrapper[5120]: I0122 11:51:44.837851 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69"} err="failed to get container status \"1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69\": rpc error: code = NotFound desc = could not find container \"1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69\": container with ID starting with 1970871bfcc664e7bd0d7d614acf5222d8586ea1979edd4618dd7138b6e81a69 not found: ID does not exist" Jan 22 11:51:45 crc kubenswrapper[5120]: I0122 11:51:45.579544 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:45 crc kubenswrapper[5120]: I0122 11:51:45.580616 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:47 crc kubenswrapper[5120]: E0122 11:51:47.065193 5120 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="6.4s" Jan 22 11:51:47 crc kubenswrapper[5120]: I0122 11:51:47.819506 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 11:51:47 crc kubenswrapper[5120]: I0122 11:51:47.819565 5120 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7" exitCode=1 Jan 22 11:51:47 crc kubenswrapper[5120]: I0122 11:51:47.819744 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7"} Jan 22 11:51:47 crc kubenswrapper[5120]: I0122 11:51:47.820793 5120 
scope.go:117] "RemoveContainer" containerID="d8530587a7dacf7f1e414d966e228d915e25d07d268990a0cbd418ca534f37e7" Jan 22 11:51:47 crc kubenswrapper[5120]: I0122 11:51:47.821054 5120 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:47 crc kubenswrapper[5120]: I0122 11:51:47.821338 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:47 crc kubenswrapper[5120]: I0122 11:51:47.821677 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.570840 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.572197 5120 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.572668 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.573053 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.584052 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.584098 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:48 crc kubenswrapper[5120]: E0122 11:51:48.584626 5120 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 
11:51:48.586062 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:48 crc kubenswrapper[5120]: W0122 11:51:48.606640 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-cc561ee96b8a1542758bfcf01be3a85c24edc50a3487120817da20885acf41a3 WatchSource:0}: Error finding container cc561ee96b8a1542758bfcf01be3a85c24edc50a3487120817da20885acf41a3: Status 404 returned error can't find the container with id cc561ee96b8a1542758bfcf01be3a85c24edc50a3487120817da20885acf41a3 Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.825581 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"cc561ee96b8a1542758bfcf01be3a85c24edc50a3487120817da20885acf41a3"} Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.828329 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.828408 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"3ef6755048cc0fe7514752d596373386336135c1ba58aff51a2e461dc885948a"} Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.829778 5120 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.829949 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:48 crc kubenswrapper[5120]: I0122 11:51:48.830135 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:49 crc kubenswrapper[5120]: I0122 11:51:49.838341 5120 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="96218f764b310f89071e3f04e8558cb34a8b29869c9c379c60ba16ecec9042cd" exitCode=0 Jan 22 11:51:49 crc kubenswrapper[5120]: I0122 11:51:49.838447 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"96218f764b310f89071e3f04e8558cb34a8b29869c9c379c60ba16ecec9042cd"} Jan 22 11:51:49 crc kubenswrapper[5120]: I0122 11:51:49.838687 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:49 crc kubenswrapper[5120]: I0122 11:51:49.838829 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:49 crc kubenswrapper[5120]: E0122 11:51:49.839195 5120 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:49 crc kubenswrapper[5120]: I0122 11:51:49.840607 5120 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:49 crc kubenswrapper[5120]: I0122 11:51:49.841511 5120 status_manager.go:895] "Failed to get status for pod" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" pod="openshift-authentication/oauth-openshift-66458b6674-25dsq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-25dsq\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:49 crc kubenswrapper[5120]: I0122 11:51:49.841951 5120 status_manager.go:895] "Failed to get status for pod" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.36:6443: connect: connection refused" Jan 22 11:51:50 crc kubenswrapper[5120]: I0122 11:51:50.847806 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6ebad79612c4c7fa4d607ff9cce803f48601be83abb186a55e5c558549c3166b"} Jan 22 11:51:50 crc kubenswrapper[5120]: I0122 11:51:50.848190 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"e0f19d8e763a99e493269d5d958da4f8368d1fce51a1c596d8605b4bfd7f7f57"} Jan 22 11:51:50 crc kubenswrapper[5120]: I0122 11:51:50.848201 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"67488d9f7d77d47e57f5c6b52c8e91a033d7fa0e6d519d8082c5f2c87b11397f"} Jan 22 11:51:51 crc kubenswrapper[5120]: I0122 11:51:51.739281 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:51:51 crc kubenswrapper[5120]: I0122 11:51:51.751472 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:51:51 crc kubenswrapper[5120]: I0122 11:51:51.856689 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d9a496b4bab60873fc28ca7402b37b731300f0df000a573ef929311e699429f4"} Jan 22 11:51:51 crc kubenswrapper[5120]: I0122 
11:51:51.856762 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"71b962cc8ada4fe61e821258d5ea7098651ad03533bca91eeacc32f2d01336fe"} Jan 22 11:51:51 crc kubenswrapper[5120]: I0122 11:51:51.857044 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:51:51 crc kubenswrapper[5120]: I0122 11:51:51.857371 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:51 crc kubenswrapper[5120]: I0122 11:51:51.857401 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:53 crc kubenswrapper[5120]: I0122 11:51:53.586173 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:53 crc kubenswrapper[5120]: I0122 11:51:53.586234 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:53 crc kubenswrapper[5120]: I0122 11:51:53.593500 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:56 crc kubenswrapper[5120]: I0122 11:51:56.872477 5120 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:56 crc kubenswrapper[5120]: I0122 11:51:56.873013 5120 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:56 crc kubenswrapper[5120]: I0122 11:51:56.939080 5120 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="804b3ebe-5124-4e95-baf7-1b1e38ed753c" Jan 22 11:51:57 crc kubenswrapper[5120]: I0122 11:51:57.892754 5120 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:57 crc kubenswrapper[5120]: I0122 11:51:57.893266 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:57 crc kubenswrapper[5120]: I0122 11:51:57.892752 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:57 crc kubenswrapper[5120]: I0122 11:51:57.897820 5120 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="804b3ebe-5124-4e95-baf7-1b1e38ed753c" Jan 22 11:51:57 crc kubenswrapper[5120]: I0122 11:51:57.899252 5120 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://67488d9f7d77d47e57f5c6b52c8e91a033d7fa0e6d519d8082c5f2c87b11397f" Jan 22 11:51:57 crc kubenswrapper[5120]: I0122 11:51:57.899275 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:51:58 crc kubenswrapper[5120]: I0122 11:51:58.896817 5120 
kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:58 crc kubenswrapper[5120]: I0122 11:51:58.896849 5120 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="410ef417-8c38-4aac-9a75-c1a938b0cf8c" Jan 22 11:51:58 crc kubenswrapper[5120]: I0122 11:51:58.900754 5120 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="804b3ebe-5124-4e95-baf7-1b1e38ed753c" Jan 22 11:52:01 crc kubenswrapper[5120]: I0122 11:52:01.972814 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:52:01 crc kubenswrapper[5120]: I0122 11:52:01.973359 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:52:02 crc kubenswrapper[5120]: I0122 11:52:02.872688 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 11:52:07 crc kubenswrapper[5120]: I0122 11:52:07.353894 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:07 crc kubenswrapper[5120]: I0122 11:52:07.450925 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 22 11:52:07 crc kubenswrapper[5120]: I0122 11:52:07.481641 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 22 11:52:07 crc kubenswrapper[5120]: I0122 11:52:07.781699 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.214647 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.243801 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.467089 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.560288 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.596675 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.712169 5120 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.805206 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 22 11:52:08 crc kubenswrapper[5120]: I0122 11:52:08.896738 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.219017 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.223301 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.584900 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.602244 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.723752 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.768126 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.828065 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 22 11:52:09 crc kubenswrapper[5120]: I0122 11:52:09.868916 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.088315 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.184409 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.285119 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.324697 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.357648 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.376000 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.376204 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 
11:52:10.391226 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.443169 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.451238 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.524202 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.630832 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.673713 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.778669 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.813126 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.870439 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.873356 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.941102 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.943502 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.982587 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.989279 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:10 crc kubenswrapper[5120]: I0122 11:52:10.995437 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.091551 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.105343 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.126665 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 22 11:52:11 crc 
kubenswrapper[5120]: I0122 11:52:11.128202 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.335226 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.408005 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.478132 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.514760 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.667332 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.668010 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.724608 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.729836 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.739543 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.822821 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.911616 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 22 11:52:11 crc kubenswrapper[5120]: I0122 11:52:11.934896 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.077455 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.124535 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.145031 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.154790 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.174850 5120 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.229455 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.285106 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.396853 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.492447 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.535167 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.559535 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.613611 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.732560 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.814042 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 22 11:52:12 crc kubenswrapper[5120]: I0122 11:52:12.939619 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.091068 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.185887 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.222944 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.239876 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.248724 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.270595 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.278503 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 22 11:52:13 crc 
kubenswrapper[5120]: I0122 11:52:13.324274 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.366983 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.496470 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.564138 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.579839 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.599765 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.608240 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.684784 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.733345 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.793831 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.819000 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.859269 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.922290 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 22 11:52:13 crc kubenswrapper[5120]: I0122 11:52:13.995685 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.006681 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.122548 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.133654 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.179758 5120 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.181132 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.269098 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.276406 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.439893 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.483627 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.541583 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.614743 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.671353 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.697059 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.766511 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.801951 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.839836 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.864842 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.899029 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.916899 5120 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.922903 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-25dsq","openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.923022 5120 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.931364 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.950872 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.955250 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.955227615 podStartE2EDuration="18.955227615s" podCreationTimestamp="2026-01-22 11:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:52:14.951169534 +0000 UTC m=+269.695117885" watchObservedRunningTime="2026-01-22 11:52:14.955227615 +0000 UTC m=+269.699176006" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.965383 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 22 11:52:14 crc kubenswrapper[5120]: I0122 11:52:14.980924 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.028633 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.137366 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.160172 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-859f9fbf8c-djk86"] Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.161293 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" containerName="oauth-openshift" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.161330 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" containerName="oauth-openshift" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.161361 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" containerName="installer" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.161374 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" containerName="installer" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.161645 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f30ae543-bf57-4bbc-9c40-25ceab4603c6" containerName="installer" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.161671 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" containerName="oauth-openshift" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.186872 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.190091 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.190421 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.193397 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.193899 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.195047 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.197372 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.197671 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.200979 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.201041 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.201289 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.201535 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.201612 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.201873 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.205260 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.211411 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.270860 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.300307 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.300769 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-service-ca\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.301041 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.301225 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-error\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.301390 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-router-certs\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.301543 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-cliconfig\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.301709 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-session\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.301901 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/964936ed-c6ba-45f2-9ccd-871c228a1383-audit-dir\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc 
kubenswrapper[5120]: I0122 11:52:15.302093 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-audit-policies\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.302244 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.302406 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjgfn\" (UniqueName: \"kubernetes.io/projected/964936ed-c6ba-45f2-9ccd-871c228a1383-kube-api-access-jjgfn\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.302555 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-login\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.302700 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.302842 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-serving-cert\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.356472 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.376141 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.404579 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-session\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" 
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405116 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/964936ed-c6ba-45f2-9ccd-871c228a1383-audit-dir\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405233 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-audit-policies\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405273 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405323 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/964936ed-c6ba-45f2-9ccd-871c228a1383-audit-dir\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405413 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jjgfn\" (UniqueName: \"kubernetes.io/projected/964936ed-c6ba-45f2-9ccd-871c228a1383-kube-api-access-jjgfn\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405476 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-login\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405556 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.405625 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-serving-cert\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406186 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406335 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-service-ca\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406432 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406495 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-error\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406587 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-router-certs\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406644 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-cliconfig\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406845 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-audit-policies\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.406862 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.407660 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-cliconfig\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.408174 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-service-ca\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.415184 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-login\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.415220 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-router-certs\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.415873 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-session\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.417511 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-serving-cert\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.417795 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.420557 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.421676 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\"
(UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-template-error\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.427377 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/964936ed-c6ba-45f2-9ccd-871c228a1383-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.438736 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjgfn\" (UniqueName: \"kubernetes.io/projected/964936ed-c6ba-45f2-9ccd-871c228a1383-kube-api-access-jjgfn\") pod \"oauth-openshift-859f9fbf8c-djk86\" (UID: \"964936ed-c6ba-45f2-9ccd-871c228a1383\") " pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.442851 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.502381 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.564734 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.586895 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.596532 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebd6777-9b90-4b62-a3a9-360290cb39a9" path="/var/lib/kubelet/pods/bebd6777-9b90-4b62-a3a9-360290cb39a9/volumes" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.623643 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.634876 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.676538 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.723811 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.735079 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.839029 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 22 11:52:15 crc kubenswrapper[5120]: I0122 11:52:15.978129 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.097408 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.188176 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.376699 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.414211 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.495720 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.517503 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.527772 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.564262 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.572430 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.650579 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.707618 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.840236 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.872877 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.886916 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.938569 5120 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 11:52:16 crc kubenswrapper[5120]: I0122 11:52:16.979615 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.003487 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 22 
11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.046470 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.119888 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.152570 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.153770 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.251290 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.350895 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.375155 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.381203 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.401468 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.459414 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.519820 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.529034 5120 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.703323 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.763355 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.838146 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.870438 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.948903 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:17 crc kubenswrapper[5120]: I0122 11:52:17.949435 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.106444 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.179163 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.202328 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.254716 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.285914 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.287460 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.542482 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.543761 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.602683 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.732317 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.772262 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.789348 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.831841 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.832077 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.914084 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:18 crc kubenswrapper[5120]: I0122 11:52:18.965395 5120 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 
11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.171364 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.407818 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.546386 5120 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.546839 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://a8463e343cc5ae2c432dc371c37cafeb5cfd870e6bf3b62821dbcd1658194ee4" gracePeriod=5 Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.589305 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.743312 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.773452 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.835829 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.843550 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.853876 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.861872 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.952844 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 22 11:52:19 crc kubenswrapper[5120]: I0122 11:52:19.956406 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.128107 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.249711 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.253946 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.480820 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.499529 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.594213 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.973747 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:20 crc kubenswrapper[5120]: I0122 11:52:20.994856 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.024413 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.030914 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.074743 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.079876 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.178889 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.182620 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.268436 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.487192 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.532016 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.534863 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.561768 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.571398 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.811640 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.913068 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.955555 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 22 11:52:21 crc kubenswrapper[5120]: I0122 11:52:21.971905 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 22 11:52:22 crc kubenswrapper[5120]: I0122 11:52:22.001270 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 22 11:52:22 crc kubenswrapper[5120]: I0122 11:52:22.029494 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 22 11:52:22 crc kubenswrapper[5120]: I0122 11:52:22.149093 5120 ???:1] "http: TLS handshake error from 192.168.126.11:35886: no serving certificate available for the kubelet" Jan 22 11:52:22 crc kubenswrapper[5120]: I0122 11:52:22.192581 5120 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 22 11:52:22 crc kubenswrapper[5120]: I0122 11:52:22.337633 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:22 crc kubenswrapper[5120]: I0122 11:52:22.384636 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 22 11:52:22 crc kubenswrapper[5120]: I0122 11:52:22.432076 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 22 11:52:23 crc kubenswrapper[5120]: I0122 11:52:23.008027 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 22 11:52:23 crc kubenswrapper[5120]: I0122 11:52:23.423059 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 22 11:52:23 crc kubenswrapper[5120]: I0122 11:52:23.462668 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.067882 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.068266 5120 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="a8463e343cc5ae2c432dc371c37cafeb5cfd870e6bf3b62821dbcd1658194ee4" exitCode=137 Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.128567 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.128738 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.130910 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.153841 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.153913 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.154150 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.154138 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.154179 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.154254 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.154279 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.154308 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.154373 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.155010 5120 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.155044 5120 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.155063 5120 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.155075 5120 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.167535 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.256326 5120 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.579532 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 22 11:52:25 crc kubenswrapper[5120]: I0122 11:52:25.582597 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 22 11:52:26 crc kubenswrapper[5120]: I0122 11:52:26.075183 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 22 11:52:26 crc kubenswrapper[5120]: I0122 11:52:26.075379 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 11:52:26 crc kubenswrapper[5120]: I0122 11:52:26.075399 5120 scope.go:117] "RemoveContainer" containerID="a8463e343cc5ae2c432dc371c37cafeb5cfd870e6bf3b62821dbcd1658194ee4" Jan 22 11:52:26 crc kubenswrapper[5120]: I0122 11:52:26.076894 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 22 11:52:26 crc kubenswrapper[5120]: I0122 11:52:26.082050 5120 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 22 11:52:31 crc kubenswrapper[5120]: I0122 11:52:31.496079 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 22 11:52:31 crc kubenswrapper[5120]: I0122 11:52:31.972946 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:52:31 crc kubenswrapper[5120]: I0122 11:52:31.973059 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:52:31 crc kubenswrapper[5120]: I0122 11:52:31.973113 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:52:31 crc kubenswrapper[5120]: I0122 11:52:31.973774 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:52:31 crc kubenswrapper[5120]: I0122 11:52:31.973836 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24" gracePeriod=600 Jan 22 11:52:33 crc kubenswrapper[5120]: I0122 11:52:33.130572 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24" exitCode=0 Jan 22 11:52:33 crc kubenswrapper[5120]: I0122 11:52:33.130647 5120 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24"} Jan 22 11:52:33 crc kubenswrapper[5120]: I0122 11:52:33.131473 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"e857eb1297fb678314f51a1be1533aaadb53a0e5183e6c42cc64ea1b07667a10"} Jan 22 11:52:36 crc kubenswrapper[5120]: I0122 11:52:36.808799 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 11:52:39 crc kubenswrapper[5120]: I0122 11:52:39.167798 5120 generic.go:358] "Generic (PLEG): container finished" podID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerID="c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22" exitCode=0 Jan 22 11:52:39 crc kubenswrapper[5120]: I0122 11:52:39.167864 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" event={"ID":"17d1692e-e64c-415e-98c6-fc0e5c799fe0","Type":"ContainerDied","Data":"c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22"} Jan 22 11:52:39 crc kubenswrapper[5120]: I0122 11:52:39.168888 5120 scope.go:117] "RemoveContainer" containerID="c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22" Jan 22 11:52:39 crc kubenswrapper[5120]: I0122 11:52:39.875748 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:52:40 crc kubenswrapper[5120]: I0122 11:52:40.177141 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" event={"ID":"17d1692e-e64c-415e-98c6-fc0e5c799fe0","Type":"ContainerStarted","Data":"c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30"} Jan 22 11:52:40 crc kubenswrapper[5120]: I0122 11:52:40.177536 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:52:40 crc kubenswrapper[5120]: I0122 11:52:40.180385 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:52:41 crc kubenswrapper[5120]: I0122 11:52:41.769340 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 22 11:52:43 crc kubenswrapper[5120]: I0122 11:52:43.225685 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 22 11:52:44 crc kubenswrapper[5120]: I0122 11:52:44.140893 5120 ???:1] "http: TLS handshake error from 192.168.126.11:43860: no serving certificate available for the kubelet" Jan 22 11:52:45 crc kubenswrapper[5120]: I0122 11:52:45.495169 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 22 11:52:45 crc kubenswrapper[5120]: I0122 11:52:45.759373 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 11:52:45 crc 
Jan 22 11:52:48 crc kubenswrapper[5120]: I0122 11:52:48.070393 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 22 11:52:48 crc kubenswrapper[5120]: I0122 11:52:48.477502 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Jan 22 11:52:48 crc kubenswrapper[5120]: I0122 11:52:48.623592 5120 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 22 11:52:53 crc kubenswrapper[5120]: I0122 11:52:53.157134 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Jan 22 11:52:53 crc kubenswrapper[5120]: I0122 11:52:53.180790 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Jan 22 11:52:53 crc kubenswrapper[5120]: I0122 11:52:53.336859 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 22 11:52:53 crc kubenswrapper[5120]: I0122 11:52:53.817484 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-859f9fbf8c-djk86"]
Jan 22 11:52:54 crc kubenswrapper[5120]: I0122 11:52:54.008364 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 22 11:52:54 crc kubenswrapper[5120]: I0122 11:52:54.026743 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-859f9fbf8c-djk86"]
Jan 22 11:52:54 crc kubenswrapper[5120]: I0122 11:52:54.032232 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 11:52:54 crc kubenswrapper[5120]: I0122 11:52:54.281896 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" event={"ID":"964936ed-c6ba-45f2-9ccd-871c228a1383","Type":"ContainerStarted","Data":"fc4cee474f1ff19682c4f444f2fabd3665b45c2128dfba20159e306ed490cf50"}
Jan 22 11:52:55 crc kubenswrapper[5120]: I0122 11:52:55.290547 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" event={"ID":"964936ed-c6ba-45f2-9ccd-871c228a1383","Type":"ContainerStarted","Data":"f3cddea63f64dea9bbc8955882e5983e7e468173d44d256dd0e0dd293dd54ccb"}
Jan 22 11:52:55 crc kubenswrapper[5120]: I0122 11:52:55.291069 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:55 crc kubenswrapper[5120]: I0122 11:52:55.299257 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86"
Jan 22 11:52:55 crc kubenswrapper[5120]: I0122 11:52:55.319259 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-859f9fbf8c-djk86" podStartSLOduration=97.319239968 podStartE2EDuration="1m37.319239968s" podCreationTimestamp="2026-01-22 11:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:52:55.314099559 +0000 UTC m=+310.058047900" watchObservedRunningTime="2026-01-22 11:52:55.319239968 +0000 UTC m=+310.063188309"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.420541 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"]
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.421047 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" podUID="007c14e3-9fa4-44aa-8d05-a57c4dc222a1" containerName="controller-manager" containerID="cri-o://0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761" gracePeriod=30
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.439640 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"]
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.440777 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" podUID="e36a1cae-0915-45b1-abf9-2f44c78f3306" containerName="route-controller-manager" containerID="cri-o://5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb" gracePeriod=30
Jan 22 11:52:58 crc kubenswrapper[5120]: E0122 11:52:58.499165 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode36a1cae_0915_45b1_abf9_2f44c78f3306.slice/crio-5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb.scope\": RecentStats: unable to find data in memory cache]"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.855082 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.889731 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-lpx9q"] Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.890349 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.890366 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.890388 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="007c14e3-9fa4-44aa-8d05-a57c4dc222a1" containerName="controller-manager" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.890396 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="007c14e3-9fa4-44aa-8d05-a57c4dc222a1" containerName="controller-manager" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.890502 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="007c14e3-9fa4-44aa-8d05-a57c4dc222a1" containerName="controller-manager" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.890515 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.896005 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.905088 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-lpx9q"] Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.920000 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.965296 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct"] Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.966169 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e36a1cae-0915-45b1-abf9-2f44c78f3306" containerName="route-controller-manager" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.966194 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36a1cae-0915-45b1-abf9-2f44c78f3306" containerName="route-controller-manager" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.966304 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="e36a1cae-0915-45b1-abf9-2f44c78f3306" containerName="route-controller-manager" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.971670 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-config\") pod \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.971944 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-proxy-ca-bundles\") pod \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972192 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzcsr\" (UniqueName: \"kubernetes.io/projected/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-kube-api-access-kzcsr\") pod \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972243 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-client-ca\") pod \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972327 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-serving-cert\") pod \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972365 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-tmp\") pod \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\" (UID: \"007c14e3-9fa4-44aa-8d05-a57c4dc222a1\") " Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972517 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-client-ca\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972615 5120 
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972765 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-config\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972772 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "007c14e3-9fa4-44aa-8d05-a57c4dc222a1" (UID: "007c14e3-9fa4-44aa-8d05-a57c4dc222a1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972809 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-proxy-ca-bundles\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972879 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-config" (OuterVolumeSpecName: "config") pod "007c14e3-9fa4-44aa-8d05-a57c4dc222a1" (UID: "007c14e3-9fa4-44aa-8d05-a57c4dc222a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.972886 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d98257c-df7b-48f7-b8c0-358847c5b9ce-tmp\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.973060 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-tmp" (OuterVolumeSpecName: "tmp") pod "007c14e3-9fa4-44aa-8d05-a57c4dc222a1" (UID: "007c14e3-9fa4-44aa-8d05-a57c4dc222a1"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.973372 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d98257c-df7b-48f7-b8c0-358847c5b9ce-serving-cert\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.973554 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-client-ca" (OuterVolumeSpecName: "client-ca") pod "007c14e3-9fa4-44aa-8d05-a57c4dc222a1" (UID: "007c14e3-9fa4-44aa-8d05-a57c4dc222a1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.973670 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.973693 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-tmp\") on node \"crc\" DevicePath \"\""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.973706 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-config\") on node \"crc\" DevicePath \"\""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.974240 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct"]
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.974423 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct"
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.979709 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "007c14e3-9fa4-44aa-8d05-a57c4dc222a1" (UID: "007c14e3-9fa4-44aa-8d05-a57c4dc222a1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 22 11:52:58 crc kubenswrapper[5120]: I0122 11:52:58.979898 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-kube-api-access-kzcsr" (OuterVolumeSpecName: "kube-api-access-kzcsr") pod "007c14e3-9fa4-44aa-8d05-a57c4dc222a1" (UID: "007c14e3-9fa4-44aa-8d05-a57c4dc222a1"). InnerVolumeSpecName "kube-api-access-kzcsr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075178 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjndr\" (UniqueName: \"kubernetes.io/projected/e36a1cae-0915-45b1-abf9-2f44c78f3306-kube-api-access-wjndr\") pod \"e36a1cae-0915-45b1-abf9-2f44c78f3306\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075235 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e36a1cae-0915-45b1-abf9-2f44c78f3306-tmp\") pod \"e36a1cae-0915-45b1-abf9-2f44c78f3306\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075277 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-config\") pod \"e36a1cae-0915-45b1-abf9-2f44c78f3306\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075425 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-client-ca\") pod \"e36a1cae-0915-45b1-abf9-2f44c78f3306\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075473 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e36a1cae-0915-45b1-abf9-2f44c78f3306-serving-cert\") pod \"e36a1cae-0915-45b1-abf9-2f44c78f3306\" (UID: \"e36a1cae-0915-45b1-abf9-2f44c78f3306\") " Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075575 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cgd8l\" (UniqueName: \"kubernetes.io/projected/2d98257c-df7b-48f7-b8c0-358847c5b9ce-kube-api-access-cgd8l\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075610 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70f53da5-8baf-4c45-8bb7-cf3fce499981-tmp\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075658 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-config\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075683 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-proxy-ca-bundles\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075710 5120 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f53da5-8baf-4c45-8bb7-cf3fce499981-serving-cert\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075740 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d98257c-df7b-48f7-b8c0-358847c5b9ce-tmp\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075778 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrdtp\" (UniqueName: \"kubernetes.io/projected/70f53da5-8baf-4c45-8bb7-cf3fce499981-kube-api-access-jrdtp\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075798 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d98257c-df7b-48f7-b8c0-358847c5b9ce-serving-cert\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075819 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-config\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075887 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-client-ca\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075912 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-client-ca\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.075984 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kzcsr\" (UniqueName: \"kubernetes.io/projected/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-kube-api-access-kzcsr\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.076000 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 
11:52:59.076013 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/007c14e3-9fa4-44aa-8d05-a57c4dc222a1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.077172 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-client-ca\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.077253 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d98257c-df7b-48f7-b8c0-358847c5b9ce-tmp\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.077719 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e36a1cae-0915-45b1-abf9-2f44c78f3306-tmp" (OuterVolumeSpecName: "tmp") pod "e36a1cae-0915-45b1-abf9-2f44c78f3306" (UID: "e36a1cae-0915-45b1-abf9-2f44c78f3306"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.078296 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-config\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.078368 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-client-ca" (OuterVolumeSpecName: "client-ca") pod "e36a1cae-0915-45b1-abf9-2f44c78f3306" (UID: "e36a1cae-0915-45b1-abf9-2f44c78f3306"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.078404 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-config" (OuterVolumeSpecName: "config") pod "e36a1cae-0915-45b1-abf9-2f44c78f3306" (UID: "e36a1cae-0915-45b1-abf9-2f44c78f3306"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.078693 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-proxy-ca-bundles\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.094799 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e36a1cae-0915-45b1-abf9-2f44c78f3306-kube-api-access-wjndr" (OuterVolumeSpecName: "kube-api-access-wjndr") pod "e36a1cae-0915-45b1-abf9-2f44c78f3306" (UID: "e36a1cae-0915-45b1-abf9-2f44c78f3306"). InnerVolumeSpecName "kube-api-access-wjndr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.095217 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36a1cae-0915-45b1-abf9-2f44c78f3306-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e36a1cae-0915-45b1-abf9-2f44c78f3306" (UID: "e36a1cae-0915-45b1-abf9-2f44c78f3306"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.095326 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d98257c-df7b-48f7-b8c0-358847c5b9ce-serving-cert\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.104515 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgd8l\" (UniqueName: \"kubernetes.io/projected/2d98257c-df7b-48f7-b8c0-358847c5b9ce-kube-api-access-cgd8l\") pod \"controller-manager-dfd68485-lpx9q\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.177298 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jrdtp\" (UniqueName: \"kubernetes.io/projected/70f53da5-8baf-4c45-8bb7-cf3fce499981-kube-api-access-jrdtp\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.177560 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-config\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.177715 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-client-ca\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.177822 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70f53da5-8baf-4c45-8bb7-cf3fce499981-tmp\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.177941 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f53da5-8baf-4c45-8bb7-cf3fce499981-serving-cert\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178088 5120 
reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e36a1cae-0915-45b1-abf9-2f44c78f3306-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178157 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wjndr\" (UniqueName: \"kubernetes.io/projected/e36a1cae-0915-45b1-abf9-2f44c78f3306-kube-api-access-wjndr\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178235 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e36a1cae-0915-45b1-abf9-2f44c78f3306-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178323 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178405 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e36a1cae-0915-45b1-abf9-2f44c78f3306-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178342 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70f53da5-8baf-4c45-8bb7-cf3fce499981-tmp\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178562 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-client-ca\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.178776 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-config\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.181421 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f53da5-8baf-4c45-8bb7-cf3fce499981-serving-cert\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.194899 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrdtp\" (UniqueName: \"kubernetes.io/projected/70f53da5-8baf-4c45-8bb7-cf3fce499981-kube-api-access-jrdtp\") pod \"route-controller-manager-5c6c48458c-zs5ct\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.228836 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.291779 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.321293 5120 generic.go:358] "Generic (PLEG): container finished" podID="007c14e3-9fa4-44aa-8d05-a57c4dc222a1" containerID="0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761" exitCode=0 Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.321905 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" event={"ID":"007c14e3-9fa4-44aa-8d05-a57c4dc222a1","Type":"ContainerDied","Data":"0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761"} Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.322139 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" event={"ID":"007c14e3-9fa4-44aa-8d05-a57c4dc222a1","Type":"ContainerDied","Data":"b06d71ff154da6cdba043abe6374515e955691a895c872e8885cdaf9984417d0"} Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.322161 5120 scope.go:117] "RemoveContainer" containerID="0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.322382 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-xw8v9" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.329117 5120 generic.go:358] "Generic (PLEG): container finished" podID="e36a1cae-0915-45b1-abf9-2f44c78f3306" containerID="5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb" exitCode=0 Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.329270 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" event={"ID":"e36a1cae-0915-45b1-abf9-2f44c78f3306","Type":"ContainerDied","Data":"5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb"} Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.329302 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.329343 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" event={"ID":"e36a1cae-0915-45b1-abf9-2f44c78f3306","Type":"ContainerDied","Data":"2d59b64b6f345357f2908b0217e759f74cb8c56e84767dbef6ac59043f972d83"} Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.368647 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"] Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.380493 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-xw8v9"] Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.381039 5120 scope.go:117] "RemoveContainer" containerID="0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761" Jan 22 11:52:59 crc kubenswrapper[5120]: E0122 11:52:59.381848 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761\": container with ID starting with 0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761 not found: ID does not exist" containerID="0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.381913 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761"} err="failed to get container status \"0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761\": rpc error: code = NotFound desc = could not find container \"0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761\": container with ID starting with 0d2967cf10b1c44b4095ca653bbf386f8d585bd4d3078507706744e938981761 not found: ID does not exist" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.382078 5120 scope.go:117] "RemoveContainer" containerID="5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.385797 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"] Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.393544 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb"] Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.401452 5120 scope.go:117] "RemoveContainer" containerID="5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb" Jan 22 11:52:59 crc kubenswrapper[5120]: E0122 11:52:59.403608 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb\": container with ID starting with 5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb not found: ID does not exist" containerID="5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.403658 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb"} err="failed to get 
container status \"5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb\": rpc error: code = NotFound desc = could not find container \"5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb\": container with ID starting with 5e977e10172d967f197ee04cf8a94ca2d54059ca15c4d92be05592d36a35cddb not found: ID does not exist" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.439351 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-lpx9q"] Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.539061 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct"] Jan 22 11:52:59 crc kubenswrapper[5120]: W0122 11:52:59.546190 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70f53da5_8baf_4c45_8bb7_cf3fce499981.slice/crio-421d4a2c35c82972597fabfeafbb19f6b05f0660346cc32082e5cec7bbf4da1c WatchSource:0}: Error finding container 421d4a2c35c82972597fabfeafbb19f6b05f0660346cc32082e5cec7bbf4da1c: Status 404 returned error can't find the container with id 421d4a2c35c82972597fabfeafbb19f6b05f0660346cc32082e5cec7bbf4da1c Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.580106 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="007c14e3-9fa4-44aa-8d05-a57c4dc222a1" path="/var/lib/kubelet/pods/007c14e3-9fa4-44aa-8d05-a57c4dc222a1/volumes" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.581014 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e36a1cae-0915-45b1-abf9-2f44c78f3306" path="/var/lib/kubelet/pods/e36a1cae-0915-45b1-abf9-2f44c78f3306/volumes" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.701082 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.773259 5120 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-fzgnb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": context deadline exceeded" start-of-body= Jan 22 11:52:59 crc kubenswrapper[5120]: I0122 11:52:59.773381 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-fzgnb" podUID="e36a1cae-0915-45b1-abf9-2f44c78f3306" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": context deadline exceeded" Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.339839 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" event={"ID":"70f53da5-8baf-4c45-8bb7-cf3fce499981","Type":"ContainerStarted","Data":"d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a"} Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.340020 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" event={"ID":"70f53da5-8baf-4c45-8bb7-cf3fce499981","Type":"ContainerStarted","Data":"421d4a2c35c82972597fabfeafbb19f6b05f0660346cc32082e5cec7bbf4da1c"} Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.340276 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.344688 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" event={"ID":"2d98257c-df7b-48f7-b8c0-358847c5b9ce","Type":"ContainerStarted","Data":"939b222f75022a729ec8f3d4c9a5b63dd9361453fb29d64c7c33225556190215"} Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.345348 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" event={"ID":"2d98257c-df7b-48f7-b8c0-358847c5b9ce","Type":"ContainerStarted","Data":"00927548cf3ab5834622397c44d482db6c7268747d537a2305987359dc9ec861"} Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.347043 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.365516 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.382270 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" podStartSLOduration=2.382245675 podStartE2EDuration="2.382245675s" podCreationTimestamp="2026-01-22 11:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:53:00.380122793 +0000 UTC m=+315.124071134" watchObservedRunningTime="2026-01-22 11:53:00.382245675 +0000 UTC m=+315.126194056" Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.415583 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" podStartSLOduration=2.415568908 podStartE2EDuration="2.415568908s" podCreationTimestamp="2026-01-22 11:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:53:00.412653285 +0000 UTC m=+315.156601626" watchObservedRunningTime="2026-01-22 11:53:00.415568908 +0000 UTC m=+315.159517249" Jan 22 11:53:00 crc kubenswrapper[5120]: I0122 11:53:00.673492 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.432613 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct"] Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.433715 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" podUID="70f53da5-8baf-4c45-8bb7-cf3fce499981" containerName="route-controller-manager" containerID="cri-o://d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a" gracePeriod=30 Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.878919 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.904496 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs"] Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.905195 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70f53da5-8baf-4c45-8bb7-cf3fce499981" containerName="route-controller-manager" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.905214 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f53da5-8baf-4c45-8bb7-cf3fce499981" containerName="route-controller-manager" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.905310 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="70f53da5-8baf-4c45-8bb7-cf3fce499981" containerName="route-controller-manager" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.912196 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.919494 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs"] Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.958438 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-config\") pod \"70f53da5-8baf-4c45-8bb7-cf3fce499981\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.958526 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70f53da5-8baf-4c45-8bb7-cf3fce499981-tmp\") pod \"70f53da5-8baf-4c45-8bb7-cf3fce499981\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.958561 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f53da5-8baf-4c45-8bb7-cf3fce499981-serving-cert\") pod \"70f53da5-8baf-4c45-8bb7-cf3fce499981\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.958616 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-client-ca\") pod \"70f53da5-8baf-4c45-8bb7-cf3fce499981\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.958687 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrdtp\" (UniqueName: \"kubernetes.io/projected/70f53da5-8baf-4c45-8bb7-cf3fce499981-kube-api-access-jrdtp\") pod \"70f53da5-8baf-4c45-8bb7-cf3fce499981\" (UID: \"70f53da5-8baf-4c45-8bb7-cf3fce499981\") " Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.960312 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70f53da5-8baf-4c45-8bb7-cf3fce499981-tmp" (OuterVolumeSpecName: "tmp") pod "70f53da5-8baf-4c45-8bb7-cf3fce499981" (UID: "70f53da5-8baf-4c45-8bb7-cf3fce499981"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.961008 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-config" (OuterVolumeSpecName: "config") pod "70f53da5-8baf-4c45-8bb7-cf3fce499981" (UID: "70f53da5-8baf-4c45-8bb7-cf3fce499981"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.964574 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f53da5-8baf-4c45-8bb7-cf3fce499981-kube-api-access-jrdtp" (OuterVolumeSpecName: "kube-api-access-jrdtp") pod "70f53da5-8baf-4c45-8bb7-cf3fce499981" (UID: "70f53da5-8baf-4c45-8bb7-cf3fce499981"). InnerVolumeSpecName "kube-api-access-jrdtp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.965332 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f53da5-8baf-4c45-8bb7-cf3fce499981-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "70f53da5-8baf-4c45-8bb7-cf3fce499981" (UID: "70f53da5-8baf-4c45-8bb7-cf3fce499981"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:53:18 crc kubenswrapper[5120]: I0122 11:53:18.965785 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-client-ca" (OuterVolumeSpecName: "client-ca") pod "70f53da5-8baf-4c45-8bb7-cf3fce499981" (UID: "70f53da5-8baf-4c45-8bb7-cf3fce499981"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.060466 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26ed13e-d255-473f-ad8e-d3511aa1e179-config\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.060539 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f26ed13e-d255-473f-ad8e-d3511aa1e179-tmp\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.060637 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f26ed13e-d255-473f-ad8e-d3511aa1e179-serving-cert\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.060778 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzhml\" (UniqueName: \"kubernetes.io/projected/f26ed13e-d255-473f-ad8e-d3511aa1e179-kube-api-access-hzhml\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " 
pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.060803 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f26ed13e-d255-473f-ad8e-d3511aa1e179-client-ca\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.061042 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.061063 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/70f53da5-8baf-4c45-8bb7-cf3fce499981-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.061073 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70f53da5-8baf-4c45-8bb7-cf3fce499981-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.061085 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70f53da5-8baf-4c45-8bb7-cf3fce499981-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.061096 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jrdtp\" (UniqueName: \"kubernetes.io/projected/70f53da5-8baf-4c45-8bb7-cf3fce499981-kube-api-access-jrdtp\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.162140 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26ed13e-d255-473f-ad8e-d3511aa1e179-config\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.162197 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f26ed13e-d255-473f-ad8e-d3511aa1e179-tmp\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.162215 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f26ed13e-d255-473f-ad8e-d3511aa1e179-serving-cert\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.162748 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f26ed13e-d255-473f-ad8e-d3511aa1e179-tmp\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc 
kubenswrapper[5120]: I0122 11:53:19.163160 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hzhml\" (UniqueName: \"kubernetes.io/projected/f26ed13e-d255-473f-ad8e-d3511aa1e179-kube-api-access-hzhml\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.163248 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f26ed13e-d255-473f-ad8e-d3511aa1e179-client-ca\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.163558 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f26ed13e-d255-473f-ad8e-d3511aa1e179-config\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.164069 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f26ed13e-d255-473f-ad8e-d3511aa1e179-client-ca\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.166866 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f26ed13e-d255-473f-ad8e-d3511aa1e179-serving-cert\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.180815 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzhml\" (UniqueName: \"kubernetes.io/projected/f26ed13e-d255-473f-ad8e-d3511aa1e179-kube-api-access-hzhml\") pod \"route-controller-manager-788bc8974d-jc6gs\" (UID: \"f26ed13e-d255-473f-ad8e-d3511aa1e179\") " pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.227677 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.489642 5120 generic.go:358] "Generic (PLEG): container finished" podID="70f53da5-8baf-4c45-8bb7-cf3fce499981" containerID="d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a" exitCode=0 Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.489768 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" event={"ID":"70f53da5-8baf-4c45-8bb7-cf3fce499981","Type":"ContainerDied","Data":"d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a"} Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.489796 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" event={"ID":"70f53da5-8baf-4c45-8bb7-cf3fce499981","Type":"ContainerDied","Data":"421d4a2c35c82972597fabfeafbb19f6b05f0660346cc32082e5cec7bbf4da1c"} Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.489814 5120 scope.go:117] "RemoveContainer" containerID="d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.489987 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.513240 5120 scope.go:117] "RemoveContainer" containerID="d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a" Jan 22 11:53:19 crc kubenswrapper[5120]: E0122 11:53:19.513802 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a\": container with ID starting with d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a not found: ID does not exist" containerID="d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.513882 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a"} err="failed to get container status \"d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a\": rpc error: code = NotFound desc = could not find container \"d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a\": container with ID starting with d58efaf2885f6e1810bd75ff4f3173e05c971c3724a53583f83eef98ffa75d7a not found: ID does not exist" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.531101 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct"] Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.536301 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-zs5ct"] Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.580193 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f53da5-8baf-4c45-8bb7-cf3fce499981" path="/var/lib/kubelet/pods/70f53da5-8baf-4c45-8bb7-cf3fce499981/volumes" Jan 22 11:53:19 crc kubenswrapper[5120]: I0122 11:53:19.654311 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs"] Jan 22 11:53:20 crc kubenswrapper[5120]: I0122 11:53:20.498559 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" event={"ID":"f26ed13e-d255-473f-ad8e-d3511aa1e179","Type":"ContainerStarted","Data":"7eb85acce96453925d13155b248ebd46029bb3bd270dac5b96c63174c6559fde"} Jan 22 11:53:20 crc kubenswrapper[5120]: I0122 11:53:20.498603 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" event={"ID":"f26ed13e-d255-473f-ad8e-d3511aa1e179","Type":"ContainerStarted","Data":"5a872ee04ef0b173cb3e82914dad12c55dc5abe3540ea805de56604227235028"} Jan 22 11:53:20 crc kubenswrapper[5120]: I0122 11:53:20.500092 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:20 crc kubenswrapper[5120]: I0122 11:53:20.506441 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" Jan 22 11:53:20 crc kubenswrapper[5120]: I0122 11:53:20.518526 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-788bc8974d-jc6gs" podStartSLOduration=2.518507843 podStartE2EDuration="2.518507843s" podCreationTimestamp="2026-01-22 11:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:53:20.515046505 +0000 UTC m=+335.258994866" watchObservedRunningTime="2026-01-22 11:53:20.518507843 +0000 UTC m=+335.262456184" Jan 22 11:53:58 crc kubenswrapper[5120]: I0122 11:53:58.414192 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-lpx9q"] Jan 22 11:53:58 crc kubenswrapper[5120]: I0122 11:53:58.415019 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" podUID="2d98257c-df7b-48f7-b8c0-358847c5b9ce" containerName="controller-manager" containerID="cri-o://939b222f75022a729ec8f3d4c9a5b63dd9361453fb29d64c7c33225556190215" gracePeriod=30 Jan 22 11:53:58 crc kubenswrapper[5120]: I0122 11:53:58.750897 5120 generic.go:358] "Generic (PLEG): container finished" podID="2d98257c-df7b-48f7-b8c0-358847c5b9ce" containerID="939b222f75022a729ec8f3d4c9a5b63dd9361453fb29d64c7c33225556190215" exitCode=0 Jan 22 11:53:58 crc kubenswrapper[5120]: I0122 11:53:58.750987 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" event={"ID":"2d98257c-df7b-48f7-b8c0-358847c5b9ce","Type":"ContainerDied","Data":"939b222f75022a729ec8f3d4c9a5b63dd9361453fb29d64c7c33225556190215"} Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.075397 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.110679 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d98257c-df7b-48f7-b8c0-358847c5b9ce-serving-cert\") pod \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.110741 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-client-ca\") pod \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.110805 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2d98257c-df7b-48f7-b8c0-358847c5b9ce-tmp\") pod \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.110891 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-config\") pod \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.110927 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-proxy-ca-bundles\") pod \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.110983 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgd8l\" (UniqueName: \"kubernetes.io/projected/2d98257c-df7b-48f7-b8c0-358847c5b9ce-kube-api-access-cgd8l\") pod \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\" (UID: \"2d98257c-df7b-48f7-b8c0-358847c5b9ce\") " Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.111678 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg"] Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.112405 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d98257c-df7b-48f7-b8c0-358847c5b9ce-tmp" (OuterVolumeSpecName: "tmp") pod "2d98257c-df7b-48f7-b8c0-358847c5b9ce" (UID: "2d98257c-df7b-48f7-b8c0-358847c5b9ce"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.112500 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d98257c-df7b-48f7-b8c0-358847c5b9ce" containerName="controller-manager" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.112530 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d98257c-df7b-48f7-b8c0-358847c5b9ce" containerName="controller-manager" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.112651 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="2d98257c-df7b-48f7-b8c0-358847c5b9ce" containerName="controller-manager" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.112799 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-config" (OuterVolumeSpecName: "config") pod "2d98257c-df7b-48f7-b8c0-358847c5b9ce" (UID: "2d98257c-df7b-48f7-b8c0-358847c5b9ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.112881 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-client-ca" (OuterVolumeSpecName: "client-ca") pod "2d98257c-df7b-48f7-b8c0-358847c5b9ce" (UID: "2d98257c-df7b-48f7-b8c0-358847c5b9ce"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.112914 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2d98257c-df7b-48f7-b8c0-358847c5b9ce" (UID: "2d98257c-df7b-48f7-b8c0-358847c5b9ce"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.120322 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d98257c-df7b-48f7-b8c0-358847c5b9ce-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2d98257c-df7b-48f7-b8c0-358847c5b9ce" (UID: "2d98257c-df7b-48f7-b8c0-358847c5b9ce"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.120365 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d98257c-df7b-48f7-b8c0-358847c5b9ce-kube-api-access-cgd8l" (OuterVolumeSpecName: "kube-api-access-cgd8l") pod "2d98257c-df7b-48f7-b8c0-358847c5b9ce" (UID: "2d98257c-df7b-48f7-b8c0-358847c5b9ce"). InnerVolumeSpecName "kube-api-access-cgd8l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.122216 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.133810 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg"] Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212077 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8829h\" (UniqueName: \"kubernetes.io/projected/067ebda6-cb91-41fc-8767-fc2db64a4b9d-kube-api-access-8829h\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212364 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-client-ca\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212453 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-config\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212529 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067ebda6-cb91-41fc-8767-fc2db64a4b9d-serving-cert\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212624 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-proxy-ca-bundles\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212737 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/067ebda6-cb91-41fc-8767-fc2db64a4b9d-tmp\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212851 5120 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d98257c-df7b-48f7-b8c0-358847c5b9ce-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.212950 5120 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.213059 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/2d98257c-df7b-48f7-b8c0-358847c5b9ce-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.213135 5120 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.213212 5120 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d98257c-df7b-48f7-b8c0-358847c5b9ce-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.213275 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cgd8l\" (UniqueName: \"kubernetes.io/projected/2d98257c-df7b-48f7-b8c0-358847c5b9ce-kube-api-access-cgd8l\") on node \"crc\" DevicePath \"\"" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.320406 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-proxy-ca-bundles\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.320515 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/067ebda6-cb91-41fc-8767-fc2db64a4b9d-tmp\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.320579 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8829h\" (UniqueName: \"kubernetes.io/projected/067ebda6-cb91-41fc-8767-fc2db64a4b9d-kube-api-access-8829h\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.320615 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-client-ca\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.320650 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-config\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.320671 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067ebda6-cb91-41fc-8767-fc2db64a4b9d-serving-cert\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.322144 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-proxy-ca-bundles\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.322362 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-client-ca\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.322756 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/067ebda6-cb91-41fc-8767-fc2db64a4b9d-tmp\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.323343 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/067ebda6-cb91-41fc-8767-fc2db64a4b9d-config\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.334911 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/067ebda6-cb91-41fc-8767-fc2db64a4b9d-serving-cert\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.343703 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8829h\" (UniqueName: \"kubernetes.io/projected/067ebda6-cb91-41fc-8767-fc2db64a4b9d-kube-api-access-8829h\") pod \"controller-manager-5f9bcd899c-m6rqg\" (UID: \"067ebda6-cb91-41fc-8767-fc2db64a4b9d\") " pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.468938 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.728058 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg"] Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.759141 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" event={"ID":"067ebda6-cb91-41fc-8767-fc2db64a4b9d","Type":"ContainerStarted","Data":"e151023485daa0f2203dc72b463333e7a9e361094dcecb2ccb635ef072777c68"} Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.760743 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" event={"ID":"2d98257c-df7b-48f7-b8c0-358847c5b9ce","Type":"ContainerDied","Data":"00927548cf3ab5834622397c44d482db6c7268747d537a2305987359dc9ec861"} Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.760797 5120 scope.go:117] "RemoveContainer" containerID="939b222f75022a729ec8f3d4c9a5b63dd9361453fb29d64c7c33225556190215" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.760850 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dfd68485-lpx9q" Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.798911 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-lpx9q"] Jan 22 11:53:59 crc kubenswrapper[5120]: I0122 11:53:59.802083 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-lpx9q"] Jan 22 11:54:00 crc kubenswrapper[5120]: I0122 11:54:00.008162 5120 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 11:54:00 crc kubenswrapper[5120]: I0122 11:54:00.767380 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" event={"ID":"067ebda6-cb91-41fc-8767-fc2db64a4b9d","Type":"ContainerStarted","Data":"79cf6e9bf1240a7859af4637d9bf77fda5cc5d5ba12c513dc41da5fda2af2411"} Jan 22 11:54:00 crc kubenswrapper[5120]: I0122 11:54:00.767697 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:54:00 crc kubenswrapper[5120]: I0122 11:54:00.774705 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" Jan 22 11:54:00 crc kubenswrapper[5120]: I0122 11:54:00.792840 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f9bcd899c-m6rqg" podStartSLOduration=2.792796332 podStartE2EDuration="2.792796332s" podCreationTimestamp="2026-01-22 11:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:54:00.790673189 +0000 UTC m=+375.534621550" watchObservedRunningTime="2026-01-22 11:54:00.792796332 +0000 UTC m=+375.536744683" Jan 22 11:54:01 crc kubenswrapper[5120]: I0122 11:54:01.579665 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d98257c-df7b-48f7-b8c0-358847c5b9ce" path="/var/lib/kubelet/pods/2d98257c-df7b-48f7-b8c0-358847c5b9ce/volumes" Jan 22 11:54:13 crc 
kubenswrapper[5120]: I0122 11:54:13.968944 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fztfm"] Jan 22 11:54:13 crc kubenswrapper[5120]: I0122 11:54:13.969929 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fztfm" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="registry-server" containerID="cri-o://bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68" gracePeriod=30 Jan 22 11:54:13 crc kubenswrapper[5120]: I0122 11:54:13.994372 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2q8d8"] Jan 22 11:54:13 crc kubenswrapper[5120]: I0122 11:54:13.995153 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2q8d8" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="registry-server" containerID="cri-o://36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02" gracePeriod=30 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.014449 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dpf6p"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.014837 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator" containerID="cri-o://c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30" gracePeriod=30 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.028400 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp8qf"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.029025 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rp8qf" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="registry-server" containerID="cri-o://043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42" gracePeriod=30 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.041899 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t67f7"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.042377 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t67f7" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="registry-server" containerID="cri-o://0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd" gracePeriod=30 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.049626 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-nzw8g"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.066124 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-nzw8g"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.066352 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.153822 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abdba773-b95f-4d73-bcb5-d36526f8e13d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.153864 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-669v5\" (UniqueName: \"kubernetes.io/projected/abdba773-b95f-4d73-bcb5-d36526f8e13d-kube-api-access-669v5\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.153920 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abdba773-b95f-4d73-bcb5-d36526f8e13d-tmp\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.153979 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/abdba773-b95f-4d73-bcb5-d36526f8e13d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.255933 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abdba773-b95f-4d73-bcb5-d36526f8e13d-tmp\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.256525 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/abdba773-b95f-4d73-bcb5-d36526f8e13d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.256592 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abdba773-b95f-4d73-bcb5-d36526f8e13d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.256610 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-669v5\" (UniqueName: \"kubernetes.io/projected/abdba773-b95f-4d73-bcb5-d36526f8e13d-kube-api-access-669v5\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.256931 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/abdba773-b95f-4d73-bcb5-d36526f8e13d-tmp\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.259354 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/abdba773-b95f-4d73-bcb5-d36526f8e13d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.266149 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/abdba773-b95f-4d73-bcb5-d36526f8e13d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.285030 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-669v5\" (UniqueName: \"kubernetes.io/projected/abdba773-b95f-4d73-bcb5-d36526f8e13d-kube-api-access-669v5\") pod \"marketplace-operator-547dbd544d-nzw8g\" (UID: \"abdba773-b95f-4d73-bcb5-d36526f8e13d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.502394 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.518546 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.548583 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.560644 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-catalog-content\") pod \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.560725 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-utilities\") pod \"4f669e70-10cd-47da-abc9-84be80cb5cfb\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.560797 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vctvt\" (UniqueName: \"kubernetes.io/projected/4f669e70-10cd-47da-abc9-84be80cb5cfb-kube-api-access-vctvt\") pod \"4f669e70-10cd-47da-abc9-84be80cb5cfb\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.560832 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gztht\" (UniqueName: \"kubernetes.io/projected/ed489f01-1188-4d6f-9ed4-9618fddf1eab-kube-api-access-gztht\") pod \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.560903 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-catalog-content\") pod \"4f669e70-10cd-47da-abc9-84be80cb5cfb\" (UID: \"4f669e70-10cd-47da-abc9-84be80cb5cfb\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.560974 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-utilities\") pod \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\" (UID: \"ed489f01-1188-4d6f-9ed4-9618fddf1eab\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.563133 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-utilities" (OuterVolumeSpecName: "utilities") pod "4f669e70-10cd-47da-abc9-84be80cb5cfb" (UID: "4f669e70-10cd-47da-abc9-84be80cb5cfb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.565965 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f669e70-10cd-47da-abc9-84be80cb5cfb-kube-api-access-vctvt" (OuterVolumeSpecName: "kube-api-access-vctvt") pod "4f669e70-10cd-47da-abc9-84be80cb5cfb" (UID: "4f669e70-10cd-47da-abc9-84be80cb5cfb"). InnerVolumeSpecName "kube-api-access-vctvt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.570606 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-utilities" (OuterVolumeSpecName: "utilities") pod "ed489f01-1188-4d6f-9ed4-9618fddf1eab" (UID: "ed489f01-1188-4d6f-9ed4-9618fddf1eab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.580159 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed489f01-1188-4d6f-9ed4-9618fddf1eab-kube-api-access-gztht" (OuterVolumeSpecName: "kube-api-access-gztht") pod "ed489f01-1188-4d6f-9ed4-9618fddf1eab" (UID: "ed489f01-1188-4d6f-9ed4-9618fddf1eab"). InnerVolumeSpecName "kube-api-access-gztht". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.599496 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f669e70-10cd-47da-abc9-84be80cb5cfb" (UID: "4f669e70-10cd-47da-abc9-84be80cb5cfb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.662276 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vctvt\" (UniqueName: \"kubernetes.io/projected/4f669e70-10cd-47da-abc9-84be80cb5cfb-kube-api-access-vctvt\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.662314 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gztht\" (UniqueName: \"kubernetes.io/projected/ed489f01-1188-4d6f-9ed4-9618fddf1eab-kube-api-access-gztht\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.662324 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.662333 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.662342 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f669e70-10cd-47da-abc9-84be80cb5cfb-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.683142 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed489f01-1188-4d6f-9ed4-9618fddf1eab" (UID: "ed489f01-1188-4d6f-9ed4-9618fddf1eab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.726663 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.763698 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489f01-1188-4d6f-9ed4-9618fddf1eab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.765319 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.771624 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866307 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzfgd\" (UniqueName: \"kubernetes.io/projected/316646c5-1898-417a-8bd7-00eeadfe1243-kube-api-access-kzfgd\") pod \"316646c5-1898-417a-8bd7-00eeadfe1243\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866371 5120 generic.go:358] "Generic (PLEG): container finished" podID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerID="c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30" exitCode=0 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866537 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsv5h\" (UniqueName: \"kubernetes.io/projected/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-kube-api-access-lsv5h\") pod \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866566 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866576 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-catalog-content\") pod \"316646c5-1898-417a-8bd7-00eeadfe1243\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866610 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-utilities\") pod \"316646c5-1898-417a-8bd7-00eeadfe1243\" (UID: \"316646c5-1898-417a-8bd7-00eeadfe1243\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866904 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" event={"ID":"17d1692e-e64c-415e-98c6-fc0e5c799fe0","Type":"ContainerDied","Data":"c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866944 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-dpf6p" event={"ID":"17d1692e-e64c-415e-98c6-fc0e5c799fe0","Type":"ContainerDied","Data":"5b1a0b828474bfc01c65e742389b89ec9558f81701ba98898857a82e2cc1733f"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.866985 5120 scope.go:117] "RemoveContainer" containerID="c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.867436 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-utilities\") pod \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.867475 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-catalog-content\") pod \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\" (UID: \"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b\") " Jan 22 11:54:14 
crc kubenswrapper[5120]: I0122 11:54:14.868648 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-utilities" (OuterVolumeSpecName: "utilities") pod "316646c5-1898-417a-8bd7-00eeadfe1243" (UID: "316646c5-1898-417a-8bd7-00eeadfe1243"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.872137 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-kube-api-access-lsv5h" (OuterVolumeSpecName: "kube-api-access-lsv5h") pod "df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" (UID: "df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b"). InnerVolumeSpecName "kube-api-access-lsv5h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.872550 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316646c5-1898-417a-8bd7-00eeadfe1243-kube-api-access-kzfgd" (OuterVolumeSpecName: "kube-api-access-kzfgd") pod "316646c5-1898-417a-8bd7-00eeadfe1243" (UID: "316646c5-1898-417a-8bd7-00eeadfe1243"). InnerVolumeSpecName "kube-api-access-kzfgd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.874109 5120 generic.go:358] "Generic (PLEG): container finished" podID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerID="36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02" exitCode=0 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.874207 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q8d8" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.874215 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q8d8" event={"ID":"ed489f01-1188-4d6f-9ed4-9618fddf1eab","Type":"ContainerDied","Data":"36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.874263 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q8d8" event={"ID":"ed489f01-1188-4d6f-9ed4-9618fddf1eab","Type":"ContainerDied","Data":"1b3c4ff9732c93011b494f79b9052c81bdd854fe832d0d1aff9714069c08086b"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.879656 5120 generic.go:358] "Generic (PLEG): container finished" podID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerID="0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd" exitCode=0 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.879811 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t67f7" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.879828 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t67f7" event={"ID":"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b","Type":"ContainerDied","Data":"0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.879865 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t67f7" event={"ID":"df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b","Type":"ContainerDied","Data":"ab803e6a4d6bc8f6c5535f7b6ba4ab7280d0c0d527dc407d8f992ddd6ad5d49c"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.882136 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-utilities" (OuterVolumeSpecName: "utilities") pod "df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" (UID: "df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.882494 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "316646c5-1898-417a-8bd7-00eeadfe1243" (UID: "316646c5-1898-417a-8bd7-00eeadfe1243"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.883733 5120 generic.go:358] "Generic (PLEG): container finished" podID="316646c5-1898-417a-8bd7-00eeadfe1243" containerID="043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42" exitCode=0 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.883847 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp8qf" event={"ID":"316646c5-1898-417a-8bd7-00eeadfe1243","Type":"ContainerDied","Data":"043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.883871 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp8qf" event={"ID":"316646c5-1898-417a-8bd7-00eeadfe1243","Type":"ContainerDied","Data":"b88cdc87cf3e9924bb751ee1a18fd60cd70c52d60437b53a435f731721d1f00b"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.883986 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp8qf" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.889786 5120 generic.go:358] "Generic (PLEG): container finished" podID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerID="bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68" exitCode=0 Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.889875 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fztfm" event={"ID":"4f669e70-10cd-47da-abc9-84be80cb5cfb","Type":"ContainerDied","Data":"bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.889912 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fztfm" event={"ID":"4f669e70-10cd-47da-abc9-84be80cb5cfb","Type":"ContainerDied","Data":"942f286364f00775972ff57ef7ee9a1b6d83531d392b957342335e79a3c8a683"} Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.890030 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fztfm" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.900210 5120 scope.go:117] "RemoveContainer" containerID="c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.918900 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2q8d8"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.929148 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2q8d8"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.929424 5120 scope.go:117] "RemoveContainer" containerID="c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30" Jan 22 11:54:14 crc kubenswrapper[5120]: E0122 11:54:14.930032 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30\": container with ID starting with c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30 not found: ID does not exist" containerID="c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.930085 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30"} err="failed to get container status \"c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30\": rpc error: code = NotFound desc = could not find container \"c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30\": container with ID starting with c1ed7efd3687998dabc1724dada2cb0471f8f9f4ce329e4f622a91d9529a5b30 not found: ID does not exist" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.930115 5120 scope.go:117] "RemoveContainer" containerID="c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22" Jan 22 11:54:14 crc kubenswrapper[5120]: E0122 11:54:14.930744 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22\": container with ID starting with c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22 not found: ID does not exist" 
containerID="c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.930800 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22"} err="failed to get container status \"c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22\": rpc error: code = NotFound desc = could not find container \"c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22\": container with ID starting with c741f63c3c18c70fb74a3e1cc4574a0434a01a3203abe1ccedcf63dda5493f22 not found: ID does not exist" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.930864 5120 scope.go:117] "RemoveContainer" containerID="36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.938296 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fztfm"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.953894 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fztfm"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.955670 5120 scope.go:117] "RemoveContainer" containerID="b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.960609 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp8qf"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.966000 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp8qf"] Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.968771 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17d1692e-e64c-415e-98c6-fc0e5c799fe0-tmp\") pod \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969072 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics\") pod \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969179 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17d1692e-e64c-415e-98c6-fc0e5c799fe0-tmp" (OuterVolumeSpecName: "tmp") pod "17d1692e-e64c-415e-98c6-fc0e5c799fe0" (UID: "17d1692e-e64c-415e-98c6-fc0e5c799fe0"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969344 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fdm8\" (UniqueName: \"kubernetes.io/projected/17d1692e-e64c-415e-98c6-fc0e5c799fe0-kube-api-access-2fdm8\") pod \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969516 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca\") pod \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\" (UID: \"17d1692e-e64c-415e-98c6-fc0e5c799fe0\") " Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969814 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lsv5h\" (UniqueName: \"kubernetes.io/projected/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-kube-api-access-lsv5h\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969837 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969849 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316646c5-1898-417a-8bd7-00eeadfe1243-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969872 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969885 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kzfgd\" (UniqueName: \"kubernetes.io/projected/316646c5-1898-417a-8bd7-00eeadfe1243-kube-api-access-kzfgd\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.969901 5120 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17d1692e-e64c-415e-98c6-fc0e5c799fe0-tmp\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.970863 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "17d1692e-e64c-415e-98c6-fc0e5c799fe0" (UID: "17d1692e-e64c-415e-98c6-fc0e5c799fe0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.973794 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d1692e-e64c-415e-98c6-fc0e5c799fe0-kube-api-access-2fdm8" (OuterVolumeSpecName: "kube-api-access-2fdm8") pod "17d1692e-e64c-415e-98c6-fc0e5c799fe0" (UID: "17d1692e-e64c-415e-98c6-fc0e5c799fe0"). InnerVolumeSpecName "kube-api-access-2fdm8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.974446 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "17d1692e-e64c-415e-98c6-fc0e5c799fe0" (UID: "17d1692e-e64c-415e-98c6-fc0e5c799fe0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:54:14 crc kubenswrapper[5120]: I0122 11:54:14.983715 5120 scope.go:117] "RemoveContainer" containerID="dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.000681 5120 scope.go:117] "RemoveContainer" containerID="36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.001469 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02\": container with ID starting with 36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02 not found: ID does not exist" containerID="36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.001523 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02"} err="failed to get container status \"36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02\": rpc error: code = NotFound desc = could not find container \"36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02\": container with ID starting with 36cd9934f20a92aa13326a062a7c371f5422564071ae91c2740e1a07898b4c02 not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.001551 5120 scope.go:117] "RemoveContainer" containerID="b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.002266 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759\": container with ID starting with b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759 not found: ID does not exist" containerID="b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.002318 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759"} err="failed to get container status \"b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759\": rpc error: code = NotFound desc = could not find container \"b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759\": container with ID starting with b00909583aa1447b916f95649d778fe12290cadd6b431d88809c3682cc826759 not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.002351 5120 scope.go:117] "RemoveContainer" containerID="dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.004266 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3\": container with ID starting with dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3 not found: ID does not exist" containerID="dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.004340 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3"} err="failed to get container status \"dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3\": rpc error: code = NotFound desc = could not find container \"dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3\": container with ID starting with dc207be41a00ceee7de3c6651059410a76c90a309847c28dd6606649dc8328a3 not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.004422 5120 scope.go:117] "RemoveContainer" containerID="0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.027149 5120 scope.go:117] "RemoveContainer" containerID="6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.051994 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" (UID: "df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.053131 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-nzw8g"] Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.063481 5120 scope.go:117] "RemoveContainer" containerID="985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.071602 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.071641 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2fdm8\" (UniqueName: \"kubernetes.io/projected/17d1692e-e64c-415e-98c6-fc0e5c799fe0-kube-api-access-2fdm8\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.071656 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.071671 5120 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d1692e-e64c-415e-98c6-fc0e5c799fe0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.103166 5120 scope.go:117] "RemoveContainer" containerID="0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.103934 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd\": container with ID starting with 0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd not found: ID does not exist" containerID="0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.104031 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd"} err="failed to get container status \"0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd\": rpc error: code = NotFound desc = could not find container \"0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd\": container with ID starting with 0a55a93788e2f3a3da24ed47901056711624f745dc882f8044ade2936144a4cd not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.104072 5120 scope.go:117] "RemoveContainer" containerID="6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.104596 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40\": container with ID starting with 6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40 not found: ID does not exist" containerID="6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.104701 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40"} err="failed to get container status \"6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40\": rpc error: code = NotFound desc = could not find container \"6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40\": container with ID starting with 6323b4a422b08b7fef939c6ed6bea5dc74a608973ed5a0ca42c7b5bd1a193d40 not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.104794 5120 scope.go:117] "RemoveContainer" containerID="985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668" Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.105135 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668\": container with ID starting with 985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668 not found: ID does not exist" containerID="985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.105174 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668"} err="failed to get container status \"985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668\": rpc error: code = NotFound desc = could not find container \"985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668\": container with ID starting with 985bb517b1a5ceb43a9211611e90da3a2637d7edc83728d91f5fb480e9687668 not found: ID does not exist" Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.105188 5120 scope.go:117] "RemoveContainer" containerID="043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42" Jan 22 11:54:15 crc 
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.123663 5120 scope.go:117] "RemoveContainer" containerID="78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.180067 5120 scope.go:117] "RemoveContainer" containerID="c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.218829 5120 scope.go:117] "RemoveContainer" containerID="043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42"
Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.219417 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42\": container with ID starting with 043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42 not found: ID does not exist" containerID="043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.219457 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42"} err="failed to get container status \"043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42\": rpc error: code = NotFound desc = could not find container \"043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42\": container with ID starting with 043c30ef82e1600d2b7aee310c29468c886daf6f11ea610b5aafacd7353aca42 not found: ID does not exist"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.219486 5120 scope.go:117] "RemoveContainer" containerID="78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb"
Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.219764 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb\": container with ID starting with 78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb not found: ID does not exist" containerID="78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.219792 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb"} err="failed to get container status \"78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb\": rpc error: code = NotFound desc = could not find container \"78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb\": container with ID starting with 78a91413b2d3e4e902040629ec2a3493284930cedd944b03f3abad707da16bcb not found: ID does not exist"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.219811 5120 scope.go:117] "RemoveContainer" containerID="c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152"
Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.220343 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152\": container with ID starting with c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152 not found: ID does not exist" containerID="c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.220448 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152"} err="failed to get container status \"c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152\": rpc error: code = NotFound desc = could not find container \"c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152\": container with ID starting with c5a54dd8cce3cf9390074acb6e0b4e6f5774c6d5a39aade6bcee188cb33a4152 not found: ID does not exist"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.220523 5120 scope.go:117] "RemoveContainer" containerID="bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.252899 5120 scope.go:117] "RemoveContainer" containerID="30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.265574 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dpf6p"]
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.276823 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-dpf6p"]
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.280411 5120 scope.go:117] "RemoveContainer" containerID="0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.284487 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t67f7"]
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.290858 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t67f7"]
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.302471 5120 scope.go:117] "RemoveContainer" containerID="bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68"
Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.303083 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68\": container with ID starting with bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68 not found: ID does not exist" containerID="bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.303135 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68"} err="failed to get container status \"bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68\": rpc error: code = NotFound desc = could not find container \"bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68\": container with ID starting with bb1b2eda9dfc535bf2571cb8ca9c5b1fc9f5f3199ff1d0107b99fac41ee37f68 not found: ID does not exist"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.303166 5120 scope.go:117] "RemoveContainer" containerID="30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe"
Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.303769 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe\": container with ID starting with 30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe not found: ID does not exist" containerID="30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.303815 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe"} err="failed to get container status \"30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe\": rpc error: code = NotFound desc = could not find container \"30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe\": container with ID starting with 30e86473793b92399bf3776be18ddc5b871c9f007c3f96eb1763cfef12eaf5fe not found: ID does not exist"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.303844 5120 scope.go:117] "RemoveContainer" containerID="0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5"
Jan 22 11:54:15 crc kubenswrapper[5120]: E0122 11:54:15.304315 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5\": container with ID starting with 0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5 not found: ID does not exist" containerID="0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.304342 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5"} err="failed to get container status \"0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5\": rpc error: code = NotFound desc = could not find container \"0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5\": container with ID starting with 0a6ff4df62b5c4da4557f4c5e8baed180b5153d309f28b63bc73b55557f599b5 not found: ID does not exist"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.579350 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" path="/var/lib/kubelet/pods/17d1692e-e64c-415e-98c6-fc0e5c799fe0/volumes"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.580751 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" path="/var/lib/kubelet/pods/316646c5-1898-417a-8bd7-00eeadfe1243/volumes"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.581501 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" path="/var/lib/kubelet/pods/4f669e70-10cd-47da-abc9-84be80cb5cfb/volumes"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.582781 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" path="/var/lib/kubelet/pods/df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b/volumes"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.583578 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" path="/var/lib/kubelet/pods/ed489f01-1188-4d6f-9ed4-9618fddf1eab/volumes"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.790332 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pn4sg"]
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791116 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="extract-utilities"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791137 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="extract-utilities"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791152 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="extract-content"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791158 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="extract-content"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791170 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="extract-content"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791177 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="extract-content"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791188 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791194 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791202 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="registry-server"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791207 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="registry-server"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791217 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="registry-server"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791222 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="registry-server"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791230 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="extract-utilities"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791236 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="extract-utilities"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791246 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="extract-utilities"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791251 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="extract-utilities"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791260 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="extract-content"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791266 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="extract-content"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791273 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="extract-utilities"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791278 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="extract-utilities"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791289 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="registry-server"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791294 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="registry-server"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791301 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="extract-content"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791306 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="extract-content"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791313 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="registry-server"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791318 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="registry-server"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791403 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791415 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="316646c5-1898-417a-8bd7-00eeadfe1243" containerName="registry-server"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791422 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="ed489f01-1188-4d6f-9ed4-9618fddf1eab" containerName="registry-server"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791432 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791440 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="4f669e70-10cd-47da-abc9-84be80cb5cfb" containerName="registry-server"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791448 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="df7a9c39-14a1-4d16-83bb-dd2b28dc6f7b" containerName="registry-server"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791578 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.791589 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d1692e-e64c-415e-98c6-fc0e5c799fe0" containerName="marketplace-operator"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.820045 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pn4sg"]
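The RemoveStaleState entries above show the CPU and memory managers dropping per-container assignments for the five deleted pod UIDs. A sketch of inspecting the kubelet's CPU-manager checkpoint that this bookkeeping is persisted in; the file path and JSON field names below are assumptions based on the usual checkpoint format, not taken from this log:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Assumed shape of the checkpoint; with the static policy, "entries" maps
// podUID -> containerName -> assigned cpuset string.
type cpuManagerCheckpoint struct {
	PolicyName    string                       `json:"policyName"`
	DefaultCPUSet string                       `json:"defaultCpuSet"`
	Entries       map[string]map[string]string `json:"entries,omitempty"`
}

func main() {
	// Default checkpoint location on the node (assumption).
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		panic(err)
	}
	var cp cpuManagerCheckpoint
	if err := json.Unmarshal(raw, &cp); err != nil {
		panic(err)
	}
	fmt.Println("policy:", cp.PolicyName)
	for podUID, containers := range cp.Entries {
		for name, cpus := range containers {
			// Entries like these are what "Deleted CPUSet assignment" prunes.
			fmt.Printf("pod %s container %s -> cpus %q\n", podUID, name, cpus)
		}
	}
}
```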
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.820221 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.824042 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.883331 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfmsb\" (UniqueName: \"kubernetes.io/projected/db99c964-abd0-4bc6-a71a-79a9c5a3c718-kube-api-access-qfmsb\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.883390 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-utilities\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.883497 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-catalog-content\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.898132 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" event={"ID":"abdba773-b95f-4d73-bcb5-d36526f8e13d","Type":"ContainerStarted","Data":"fe540687eaae41d502a010521179ea9124a176308149bad985af24b6c88b8648"}
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.898193 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" event={"ID":"abdba773-b95f-4d73-bcb5-d36526f8e13d","Type":"ContainerStarted","Data":"ffaa94d3418ec37b8f0d5b883651fdf2ef991cfafc402247440a3b167ae4e76b"}
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.898395 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.904554 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.931267 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-nzw8g" podStartSLOduration=1.931235807 podStartE2EDuration="1.931235807s" podCreationTimestamp="2026-01-22 11:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:54:15.922534676 +0000 UTC m=+390.666483027" watchObservedRunningTime="2026-01-22 11:54:15.931235807 +0000 UTC m=+390.675184148"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.984270 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qfmsb\" (UniqueName: \"kubernetes.io/projected/db99c964-abd0-4bc6-a71a-79a9c5a3c718-kube-api-access-qfmsb\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.984329 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-utilities\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.984649 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-catalog-content\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.984931 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-utilities\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:15 crc kubenswrapper[5120]: I0122 11:54:15.985135 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-catalog-content\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.004198 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfmsb\" (UniqueName: \"kubernetes.io/projected/db99c964-abd0-4bc6-a71a-79a9c5a3c718-kube-api-access-qfmsb\") pod \"redhat-marketplace-pn4sg\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.141625 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pn4sg"
Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.605046 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pn4sg"]
Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.795084 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-srj7k"]
Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.835630 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-srj7k"]
Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.835820 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-srj7k"
Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.840113 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.903577 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzptw\" (UniqueName: \"kubernetes.io/projected/65ded1b5-0551-47c3-b32f-646318c3055a-kube-api-access-qzptw\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k"
Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.903802 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ded1b5-0551-47c3-b32f-646318c3055a-catalog-content\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k"
Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.904087 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ded1b5-0551-47c3-b32f-646318c3055a-utilities\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k"
Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.916940 5120 generic.go:358] "Generic (PLEG): container finished" podID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerID="23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea" exitCode=0
Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.917099 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn4sg" event={"ID":"db99c964-abd0-4bc6-a71a-79a9c5a3c718","Type":"ContainerDied","Data":"23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea"}
Jan 22 11:54:16 crc kubenswrapper[5120]: I0122 11:54:16.917206 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn4sg" event={"ID":"db99c964-abd0-4bc6-a71a-79a9c5a3c718","Type":"ContainerStarted","Data":"be77ef2cfeb1733dbed252c7c38f2239d4e5745805f1f6b72bcb11727aa3ba6e"}
Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.005283 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ded1b5-0551-47c3-b32f-646318c3055a-catalog-content\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k"
Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.005361 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ded1b5-0551-47c3-b32f-646318c3055a-utilities\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k"
Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.005930 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qzptw\" (UniqueName: \"kubernetes.io/projected/65ded1b5-0551-47c3-b32f-646318c3055a-kube-api-access-qzptw\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k"
Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.006522 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65ded1b5-0551-47c3-b32f-646318c3055a-catalog-content\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k"
Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.006587 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65ded1b5-0551-47c3-b32f-646318c3055a-utilities\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k"
Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.027770 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzptw\" (UniqueName: \"kubernetes.io/projected/65ded1b5-0551-47c3-b32f-646318c3055a-kube-api-access-qzptw\") pod \"redhat-operators-srj7k\" (UID: \"65ded1b5-0551-47c3-b32f-646318c3055a\") " pod="openshift-marketplace/redhat-operators-srj7k"
Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.173525 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-srj7k"
Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.610052 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-srj7k"]
Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.925411 5120 generic.go:358] "Generic (PLEG): container finished" podID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerID="313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996" exitCode=0
Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.925571 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn4sg" event={"ID":"db99c964-abd0-4bc6-a71a-79a9c5a3c718","Type":"ContainerDied","Data":"313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996"}
Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.928627 5120 generic.go:358] "Generic (PLEG): container finished" podID="65ded1b5-0551-47c3-b32f-646318c3055a" containerID="7ee90baec01e23d823fc00f77c1c09aea16cd2dea6abd1149b9f9a903c101f33" exitCode=0
Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.929869 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srj7k" event={"ID":"65ded1b5-0551-47c3-b32f-646318c3055a","Type":"ContainerDied","Data":"7ee90baec01e23d823fc00f77c1c09aea16cd2dea6abd1149b9f9a903c101f33"}
Jan 22 11:54:17 crc kubenswrapper[5120]: I0122 11:54:17.930093 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srj7k" event={"ID":"65ded1b5-0551-47c3-b32f-646318c3055a","Type":"ContainerStarted","Data":"75f40da878e27c27b7b3d51f7df08d6516291f4ce894aa192378c535afb294eb"}
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.194279 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7xvj9"]
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.203566 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xvj9"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.206408 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.219436 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xvj9"]
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.325592 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90af06b6-8b8b-48f3-bfb2-541ef60610fa-utilities\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.325672 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69wt6\" (UniqueName: \"kubernetes.io/projected/90af06b6-8b8b-48f3-bfb2-541ef60610fa-kube-api-access-69wt6\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.325801 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90af06b6-8b8b-48f3-bfb2-541ef60610fa-catalog-content\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.430683 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90af06b6-8b8b-48f3-bfb2-541ef60610fa-utilities\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.430786 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-69wt6\" (UniqueName: \"kubernetes.io/projected/90af06b6-8b8b-48f3-bfb2-541ef60610fa-kube-api-access-69wt6\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.430892 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90af06b6-8b8b-48f3-bfb2-541ef60610fa-catalog-content\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.431619 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90af06b6-8b8b-48f3-bfb2-541ef60610fa-catalog-content\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.431727 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90af06b6-8b8b-48f3-bfb2-541ef60610fa-utilities\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.458243 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-69wt6\" (UniqueName: \"kubernetes.io/projected/90af06b6-8b8b-48f3-bfb2-541ef60610fa-kube-api-access-69wt6\") pod \"certified-operators-7xvj9\" (UID: \"90af06b6-8b8b-48f3-bfb2-541ef60610fa\") " pod="openshift-marketplace/certified-operators-7xvj9"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.534176 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7xvj9"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.641114 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"]
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.648924 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.659625 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"]
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.735632 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.736116 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/762bc2c2-d5b7-4508-840f-e8043b9e8729-trusted-ca\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.736151 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-registry-tls\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.736276 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7jmh\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-kube-api-access-d7jmh\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.736398 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/762bc2c2-d5b7-4508-840f-e8043b9e8729-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.736497 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-bound-sa-token\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.736547 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/762bc2c2-d5b7-4508-840f-e8043b9e8729-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.736618 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/762bc2c2-d5b7-4508-840f-e8043b9e8729-registry-certificates\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.788812 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.838314 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/762bc2c2-d5b7-4508-840f-e8043b9e8729-registry-certificates\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.838395 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/762bc2c2-d5b7-4508-840f-e8043b9e8729-trusted-ca\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.838430 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-registry-tls\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.838448 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d7jmh\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-kube-api-access-d7jmh\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.840815 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/762bc2c2-d5b7-4508-840f-e8043b9e8729-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.840947 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-bound-sa-token\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.841046 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/762bc2c2-d5b7-4508-840f-e8043b9e8729-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.841613 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/762bc2c2-d5b7-4508-840f-e8043b9e8729-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.842075 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/762bc2c2-d5b7-4508-840f-e8043b9e8729-registry-certificates\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.847055 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-registry-tls\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.847755 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/762bc2c2-d5b7-4508-840f-e8043b9e8729-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.851333 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/762bc2c2-d5b7-4508-840f-e8043b9e8729-trusted-ca\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.864608 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7jmh\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-kube-api-access-d7jmh\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.869286 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/762bc2c2-d5b7-4508-840f-e8043b9e8729-bound-sa-token\") pod \"image-registry-5d9d95bf5b-bs6c2\" (UID: \"762bc2c2-d5b7-4508-840f-e8043b9e8729\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.936320 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn4sg" event={"ID":"db99c964-abd0-4bc6-a71a-79a9c5a3c718","Type":"ContainerStarted","Data":"8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce"}
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.940854 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srj7k" event={"ID":"65ded1b5-0551-47c3-b32f-646318c3055a","Type":"ContainerStarted","Data":"deeebbdc599aa21a07f910214d49544d44bec669410e0dad93711ff84ede3673"}
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.962264 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pn4sg" podStartSLOduration=3.349949613 podStartE2EDuration="3.962236958s" podCreationTimestamp="2026-01-22 11:54:15 +0000 UTC" firstStartedPulling="2026-01-22 11:54:16.91868749 +0000 UTC m=+391.662635831" lastFinishedPulling="2026-01-22 11:54:17.530974835 +0000 UTC m=+392.274923176" observedRunningTime="2026-01-22 11:54:18.957365756 +0000 UTC m=+393.701314097" watchObservedRunningTime="2026-01-22 11:54:18.962236958 +0000 UTC m=+393.706185299"
Jan 22 11:54:18 crc kubenswrapper[5120]: I0122 11:54:18.974091 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.135926 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7xvj9"]
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.203859 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jck2s"]
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.237696 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jck2s"]
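The pod_startup_latency_tracker line for redhat-marketplace-pn4sg above carries enough data to reproduce both of its durations: the E2E figure is watchObservedRunningTime minus podCreationTimestamp, and the SLO figure appears to be E2E minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A worked check of that arithmetic using the exact timestamps from the entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time string form, which is how the
	// kubelet prints these fields (the "m=+..." monotonic suffix is dropped).
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-22 11:54:15 +0000 UTC")
	running := parse("2026-01-22 11:54:18.962236958 +0000 UTC")
	pullStart := parse("2026-01-22 11:54:16.91868749 +0000 UTC")
	pullEnd := parse("2026-01-22 11:54:17.530974835 +0000 UTC")

	e2e := running.Sub(created)           // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart)   // podStartSLOduration (pull time excluded)
	fmt.Println(e2e, slo)                 // 3.962236958s 3.349949613s, matching the log
}
```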
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.237994 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jck2s"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.241176 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.349251 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8pxw\" (UniqueName: \"kubernetes.io/projected/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-kube-api-access-c8pxw\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.349726 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-utilities\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.349797 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-catalog-content\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.451394 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-utilities\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.451496 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-catalog-content\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.451545 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c8pxw\" (UniqueName: \"kubernetes.io/projected/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-kube-api-access-c8pxw\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.452735 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-utilities\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.452843 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-catalog-content\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.477881 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8pxw\" (UniqueName: \"kubernetes.io/projected/3a14b1ee-af9d-4a1e-863f-c69c216c25d2-kube-api-access-c8pxw\") pod \"community-operators-jck2s\" (UID: \"3a14b1ee-af9d-4a1e-863f-c69c216c25d2\") " pod="openshift-marketplace/community-operators-jck2s"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.501182 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"]
Jan 22 11:54:19 crc kubenswrapper[5120]: W0122 11:54:19.504530 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod762bc2c2_d5b7_4508_840f_e8043b9e8729.slice/crio-fb9fae42b28e16da26285bfa0524ee9223b875c5bc21a3ef2b12a6e893c44b85 WatchSource:0}: Error finding container fb9fae42b28e16da26285bfa0524ee9223b875c5bc21a3ef2b12a6e893c44b85: Status 404 returned error can't find the container with id fb9fae42b28e16da26285bfa0524ee9223b875c5bc21a3ef2b12a6e893c44b85
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.597574 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jck2s"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.952727 5120 generic.go:358] "Generic (PLEG): container finished" podID="90af06b6-8b8b-48f3-bfb2-541ef60610fa" containerID="7b4d3d345283b42169dd141b69a4f9d99e8dc1bc2646babddf8c2211a8a99a8f" exitCode=0
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.952868 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xvj9" event={"ID":"90af06b6-8b8b-48f3-bfb2-541ef60610fa","Type":"ContainerDied","Data":"7b4d3d345283b42169dd141b69a4f9d99e8dc1bc2646babddf8c2211a8a99a8f"}
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.953451 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xvj9" event={"ID":"90af06b6-8b8b-48f3-bfb2-541ef60610fa","Type":"ContainerStarted","Data":"a2030e4b672505bce8a94fc526c57daa5ff25ec625e4434e96bdddcbf471ca63"}
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.956667 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" event={"ID":"762bc2c2-d5b7-4508-840f-e8043b9e8729","Type":"ContainerStarted","Data":"d162068f9a4de740a5e6f36adb3441fc33cfff04a7b7ec1d8c5f15407bca9a38"}
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.956700 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" event={"ID":"762bc2c2-d5b7-4508-840f-e8043b9e8729","Type":"ContainerStarted","Data":"fb9fae42b28e16da26285bfa0524ee9223b875c5bc21a3ef2b12a6e893c44b85"}
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.957177 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2"
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.964923 5120 generic.go:358] "Generic (PLEG): container finished" podID="65ded1b5-0551-47c3-b32f-646318c3055a" containerID="deeebbdc599aa21a07f910214d49544d44bec669410e0dad93711ff84ede3673" exitCode=0
Jan 22 11:54:19 crc kubenswrapper[5120]: I0122 11:54:19.965649 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srj7k"
event={"ID":"65ded1b5-0551-47c3-b32f-646318c3055a","Type":"ContainerDied","Data":"deeebbdc599aa21a07f910214d49544d44bec669410e0dad93711ff84ede3673"} Jan 22 11:54:20 crc kubenswrapper[5120]: I0122 11:54:20.004938 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" podStartSLOduration=2.004906899 podStartE2EDuration="2.004906899s" podCreationTimestamp="2026-01-22 11:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:54:20.00333552 +0000 UTC m=+394.747284031" watchObservedRunningTime="2026-01-22 11:54:20.004906899 +0000 UTC m=+394.748855250" Jan 22 11:54:20 crc kubenswrapper[5120]: I0122 11:54:20.060801 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jck2s"] Jan 22 11:54:20 crc kubenswrapper[5120]: I0122 11:54:20.972849 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14b1ee-af9d-4a1e-863f-c69c216c25d2" containerID="649db102ccc0f8cb8cd3bd319946592ecb9fc3671a3f04f08f8b9073bff96551" exitCode=0 Jan 22 11:54:20 crc kubenswrapper[5120]: I0122 11:54:20.972995 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jck2s" event={"ID":"3a14b1ee-af9d-4a1e-863f-c69c216c25d2","Type":"ContainerDied","Data":"649db102ccc0f8cb8cd3bd319946592ecb9fc3671a3f04f08f8b9073bff96551"} Jan 22 11:54:20 crc kubenswrapper[5120]: I0122 11:54:20.973454 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jck2s" event={"ID":"3a14b1ee-af9d-4a1e-863f-c69c216c25d2","Type":"ContainerStarted","Data":"2a0f57c0aa97cf7dcf95dd065cd65721088c61c61c21a30486701169d1432c11"} Jan 22 11:54:20 crc kubenswrapper[5120]: I0122 11:54:20.978509 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xvj9" event={"ID":"90af06b6-8b8b-48f3-bfb2-541ef60610fa","Type":"ContainerStarted","Data":"91edefe80e0e0cca5e84c20ba39d057f8947cb6f2d19f245571dd370d32d1d53"} Jan 22 11:54:20 crc kubenswrapper[5120]: I0122 11:54:20.986722 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srj7k" event={"ID":"65ded1b5-0551-47c3-b32f-646318c3055a","Type":"ContainerStarted","Data":"d0b6a0bd27b9a1fed369139925f0f56690de1df2dfc81ef1bb38d261dd735ba3"} Jan 22 11:54:21 crc kubenswrapper[5120]: I0122 11:54:21.040712 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-srj7k" podStartSLOduration=4.262153955 podStartE2EDuration="5.040682985s" podCreationTimestamp="2026-01-22 11:54:16 +0000 UTC" firstStartedPulling="2026-01-22 11:54:17.929676253 +0000 UTC m=+392.673624594" lastFinishedPulling="2026-01-22 11:54:18.708205293 +0000 UTC m=+393.452153624" observedRunningTime="2026-01-22 11:54:21.039112416 +0000 UTC m=+395.783060787" watchObservedRunningTime="2026-01-22 11:54:21.040682985 +0000 UTC m=+395.784631346" Jan 22 11:54:22 crc kubenswrapper[5120]: I0122 11:54:22.005750 5120 generic.go:358] "Generic (PLEG): container finished" podID="3a14b1ee-af9d-4a1e-863f-c69c216c25d2" containerID="d53e2125a316325485ec382f823ee992c396b419ff0e3304341d7a5ba55c81f2" exitCode=0 Jan 22 11:54:22 crc kubenswrapper[5120]: I0122 11:54:22.005852 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jck2s" 
event={"ID":"3a14b1ee-af9d-4a1e-863f-c69c216c25d2","Type":"ContainerDied","Data":"d53e2125a316325485ec382f823ee992c396b419ff0e3304341d7a5ba55c81f2"} Jan 22 11:54:22 crc kubenswrapper[5120]: I0122 11:54:22.011240 5120 generic.go:358] "Generic (PLEG): container finished" podID="90af06b6-8b8b-48f3-bfb2-541ef60610fa" containerID="91edefe80e0e0cca5e84c20ba39d057f8947cb6f2d19f245571dd370d32d1d53" exitCode=0 Jan 22 11:54:22 crc kubenswrapper[5120]: I0122 11:54:22.012075 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xvj9" event={"ID":"90af06b6-8b8b-48f3-bfb2-541ef60610fa","Type":"ContainerDied","Data":"91edefe80e0e0cca5e84c20ba39d057f8947cb6f2d19f245571dd370d32d1d53"} Jan 22 11:54:23 crc kubenswrapper[5120]: I0122 11:54:23.017719 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jck2s" event={"ID":"3a14b1ee-af9d-4a1e-863f-c69c216c25d2","Type":"ContainerStarted","Data":"07616b4ab0fbe72a8b40083365529f358718d9f1e3bbd8c71576e020bf90a90a"} Jan 22 11:54:23 crc kubenswrapper[5120]: I0122 11:54:23.020165 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7xvj9" event={"ID":"90af06b6-8b8b-48f3-bfb2-541ef60610fa","Type":"ContainerStarted","Data":"6b4da77cc17f35988344112f756922a38f85f7da0088f93e78f0a8d17cdb8c38"} Jan 22 11:54:23 crc kubenswrapper[5120]: I0122 11:54:23.040787 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jck2s" podStartSLOduration=3.468112959 podStartE2EDuration="4.040766404s" podCreationTimestamp="2026-01-22 11:54:19 +0000 UTC" firstStartedPulling="2026-01-22 11:54:20.973805985 +0000 UTC m=+395.717754326" lastFinishedPulling="2026-01-22 11:54:21.54645943 +0000 UTC m=+396.290407771" observedRunningTime="2026-01-22 11:54:23.036941728 +0000 UTC m=+397.780890079" watchObservedRunningTime="2026-01-22 11:54:23.040766404 +0000 UTC m=+397.784714745" Jan 22 11:54:23 crc kubenswrapper[5120]: I0122 11:54:23.062049 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7xvj9" podStartSLOduration=4.217636171 podStartE2EDuration="5.062029188s" podCreationTimestamp="2026-01-22 11:54:18 +0000 UTC" firstStartedPulling="2026-01-22 11:54:19.954169173 +0000 UTC m=+394.698117514" lastFinishedPulling="2026-01-22 11:54:20.79856219 +0000 UTC m=+395.542510531" observedRunningTime="2026-01-22 11:54:23.060000138 +0000 UTC m=+397.803948489" watchObservedRunningTime="2026-01-22 11:54:23.062029188 +0000 UTC m=+397.805977529" Jan 22 11:54:26 crc kubenswrapper[5120]: I0122 11:54:26.143359 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:26 crc kubenswrapper[5120]: I0122 11:54:26.143831 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:26 crc kubenswrapper[5120]: I0122 11:54:26.187181 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:27 crc kubenswrapper[5120]: I0122 11:54:27.092133 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 11:54:27 crc kubenswrapper[5120]: I0122 11:54:27.173849 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:27 crc kubenswrapper[5120]: I0122 11:54:27.174512 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:27 crc kubenswrapper[5120]: I0122 11:54:27.214257 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:28 crc kubenswrapper[5120]: I0122 11:54:28.088855 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-srj7k" Jan 22 11:54:28 crc kubenswrapper[5120]: I0122 11:54:28.534826 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:28 crc kubenswrapper[5120]: I0122 11:54:28.535505 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:28 crc kubenswrapper[5120]: I0122 11:54:28.579041 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:29 crc kubenswrapper[5120]: I0122 11:54:29.089791 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7xvj9" Jan 22 11:54:29 crc kubenswrapper[5120]: I0122 11:54:29.598372 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:29 crc kubenswrapper[5120]: I0122 11:54:29.598823 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:29 crc kubenswrapper[5120]: I0122 11:54:29.643424 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:30 crc kubenswrapper[5120]: I0122 11:54:30.104126 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jck2s" Jan 22 11:54:40 crc kubenswrapper[5120]: I0122 11:54:40.994113 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-bs6c2" Jan 22 11:54:41 crc kubenswrapper[5120]: I0122 11:54:41.061789 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-49gkx"] Jan 22 11:55:01 crc kubenswrapper[5120]: I0122 11:55:01.972828 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:55:01 crc kubenswrapper[5120]: I0122 11:55:01.973556 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.107537 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" podUID="e16334d5-3fa8-48de-a8e0-af1f9fa51926" 
containerName="registry" containerID="cri-o://e1bbb65cdff1f34e73b67d92dec5e5520f1d8e88ebcd7bef109e31c63042510c" gracePeriod=30 Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.284251 5120 generic.go:358] "Generic (PLEG): container finished" podID="e16334d5-3fa8-48de-a8e0-af1f9fa51926" containerID="e1bbb65cdff1f34e73b67d92dec5e5520f1d8e88ebcd7bef109e31c63042510c" exitCode=0 Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.284537 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" event={"ID":"e16334d5-3fa8-48de-a8e0-af1f9fa51926","Type":"ContainerDied","Data":"e1bbb65cdff1f34e73b67d92dec5e5520f1d8e88ebcd7bef109e31c63042510c"} Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.516818 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.622660 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-bound-sa-token\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.623018 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mg7w\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-kube-api-access-5mg7w\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.623075 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e16334d5-3fa8-48de-a8e0-af1f9fa51926-ca-trust-extracted\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.623101 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-certificates\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.623177 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-tls\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.623313 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.623376 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-trusted-ca\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.623433 5120 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e16334d5-3fa8-48de-a8e0-af1f9fa51926-installation-pull-secrets\") pod \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\" (UID: \"e16334d5-3fa8-48de-a8e0-af1f9fa51926\") " Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.624267 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.624320 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.630138 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.630148 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.630407 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16334d5-3fa8-48de-a8e0-af1f9fa51926-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.636066 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.642828 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-kube-api-access-5mg7w" (OuterVolumeSpecName: "kube-api-access-5mg7w") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "kube-api-access-5mg7w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.651302 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16334d5-3fa8-48de-a8e0-af1f9fa51926-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e16334d5-3fa8-48de-a8e0-af1f9fa51926" (UID: "e16334d5-3fa8-48de-a8e0-af1f9fa51926"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.725224 5120 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.725271 5120 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e16334d5-3fa8-48de-a8e0-af1f9fa51926-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.725291 5120 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.725305 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5mg7w\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-kube-api-access-5mg7w\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.725317 5120 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e16334d5-3fa8-48de-a8e0-af1f9fa51926-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.725328 5120 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:06 crc kubenswrapper[5120]: I0122 11:55:06.725341 5120 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e16334d5-3fa8-48de-a8e0-af1f9fa51926-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 11:55:07 crc kubenswrapper[5120]: I0122 11:55:07.291582 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" event={"ID":"e16334d5-3fa8-48de-a8e0-af1f9fa51926","Type":"ContainerDied","Data":"30738daefd26ec1936e210196218667fac004e9fbe6021d4a2265a6c692aabac"} Jan 22 11:55:07 crc kubenswrapper[5120]: I0122 11:55:07.291646 5120 scope.go:117] "RemoveContainer" containerID="e1bbb65cdff1f34e73b67d92dec5e5520f1d8e88ebcd7bef109e31c63042510c" Jan 22 11:55:07 crc kubenswrapper[5120]: I0122 11:55:07.293232 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-49gkx" Jan 22 11:55:07 crc kubenswrapper[5120]: I0122 11:55:07.335278 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-49gkx"] Jan 22 11:55:07 crc kubenswrapper[5120]: I0122 11:55:07.343088 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-49gkx"] Jan 22 11:55:07 crc kubenswrapper[5120]: I0122 11:55:07.582679 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16334d5-3fa8-48de-a8e0-af1f9fa51926" path="/var/lib/kubelet/pods/e16334d5-3fa8-48de-a8e0-af1f9fa51926/volumes" Jan 22 11:55:31 crc kubenswrapper[5120]: I0122 11:55:31.972333 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:55:31 crc kubenswrapper[5120]: I0122 11:55:31.974208 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.159080 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484716-phf4d"] Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.160231 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e16334d5-3fa8-48de-a8e0-af1f9fa51926" containerName="registry" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.160245 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16334d5-3fa8-48de-a8e0-af1f9fa51926" containerName="registry" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.160376 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="e16334d5-3fa8-48de-a8e0-af1f9fa51926" containerName="registry" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.182351 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484716-phf4d"] Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.182522 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484716-phf4d" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.186429 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.187655 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.188090 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.308495 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwr8\" (UniqueName: \"kubernetes.io/projected/a45690da-bfac-4359-88d2-e604fb44508e-kube-api-access-gfwr8\") pod \"auto-csr-approver-29484716-phf4d\" (UID: \"a45690da-bfac-4359-88d2-e604fb44508e\") " pod="openshift-infra/auto-csr-approver-29484716-phf4d" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.409343 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwr8\" (UniqueName: \"kubernetes.io/projected/a45690da-bfac-4359-88d2-e604fb44508e-kube-api-access-gfwr8\") pod \"auto-csr-approver-29484716-phf4d\" (UID: \"a45690da-bfac-4359-88d2-e604fb44508e\") " pod="openshift-infra/auto-csr-approver-29484716-phf4d" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.429872 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfwr8\" (UniqueName: \"kubernetes.io/projected/a45690da-bfac-4359-88d2-e604fb44508e-kube-api-access-gfwr8\") pod \"auto-csr-approver-29484716-phf4d\" (UID: \"a45690da-bfac-4359-88d2-e604fb44508e\") " pod="openshift-infra/auto-csr-approver-29484716-phf4d" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.499885 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484716-phf4d" Jan 22 11:56:00 crc kubenswrapper[5120]: I0122 11:56:00.951840 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484716-phf4d"] Jan 22 11:56:00 crc kubenswrapper[5120]: W0122 11:56:00.960604 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda45690da_bfac_4359_88d2_e604fb44508e.slice/crio-a89b7357e55c42a1482e9c53a2a331e370948f071ff18748e44904052b848977 WatchSource:0}: Error finding container a89b7357e55c42a1482e9c53a2a331e370948f071ff18748e44904052b848977: Status 404 returned error can't find the container with id a89b7357e55c42a1482e9c53a2a331e370948f071ff18748e44904052b848977 Jan 22 11:56:01 crc kubenswrapper[5120]: I0122 11:56:01.642440 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484716-phf4d" event={"ID":"a45690da-bfac-4359-88d2-e604fb44508e","Type":"ContainerStarted","Data":"a89b7357e55c42a1482e9c53a2a331e370948f071ff18748e44904052b848977"} Jan 22 11:56:01 crc kubenswrapper[5120]: I0122 11:56:01.973258 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:56:01 crc kubenswrapper[5120]: I0122 11:56:01.973407 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:56:01 crc kubenswrapper[5120]: I0122 11:56:01.973465 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:56:01 crc kubenswrapper[5120]: I0122 11:56:01.974197 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e857eb1297fb678314f51a1be1533aaadb53a0e5183e6c42cc64ea1b07667a10"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:56:01 crc kubenswrapper[5120]: I0122 11:56:01.974262 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://e857eb1297fb678314f51a1be1533aaadb53a0e5183e6c42cc64ea1b07667a10" gracePeriod=600 Jan 22 11:56:02 crc kubenswrapper[5120]: I0122 11:56:02.651543 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="e857eb1297fb678314f51a1be1533aaadb53a0e5183e6c42cc64ea1b07667a10" exitCode=0 Jan 22 11:56:02 crc kubenswrapper[5120]: I0122 11:56:02.651626 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"e857eb1297fb678314f51a1be1533aaadb53a0e5183e6c42cc64ea1b07667a10"} Jan 22 11:56:02 crc kubenswrapper[5120]: I0122 11:56:02.651977 
5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"bce4cc383007abddfe015e880c39e78b9257e350f68f93cf80d0801b94ef0ab7"} Jan 22 11:56:02 crc kubenswrapper[5120]: I0122 11:56:02.652005 5120 scope.go:117] "RemoveContainer" containerID="850c532d98a8bbc54351ca3b791b2314fd23331e43f96e8f0161ba791781ae24" Jan 22 11:56:04 crc kubenswrapper[5120]: I0122 11:56:04.499785 5120 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-5v2zf" Jan 22 11:56:04 crc kubenswrapper[5120]: I0122 11:56:04.524293 5120 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-5v2zf" Jan 22 11:56:04 crc kubenswrapper[5120]: I0122 11:56:04.674159 5120 generic.go:358] "Generic (PLEG): container finished" podID="a45690da-bfac-4359-88d2-e604fb44508e" containerID="50058b8b91e5dd9329c621c05d95a98bf79e0360bf7ed78ecfbcba7624fecffa" exitCode=0 Jan 22 11:56:04 crc kubenswrapper[5120]: I0122 11:56:04.674292 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484716-phf4d" event={"ID":"a45690da-bfac-4359-88d2-e604fb44508e","Type":"ContainerDied","Data":"50058b8b91e5dd9329c621c05d95a98bf79e0360bf7ed78ecfbcba7624fecffa"} Jan 22 11:56:05 crc kubenswrapper[5120]: I0122 11:56:05.526730 5120 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-21 11:51:04 +0000 UTC" deadline="2026-02-16 05:26:13.822479534 +0000 UTC" Jan 22 11:56:05 crc kubenswrapper[5120]: I0122 11:56:05.526780 5120 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="593h30m8.295704s" Jan 22 11:56:05 crc kubenswrapper[5120]: I0122 11:56:05.906876 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484716-phf4d" Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.068163 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfwr8\" (UniqueName: \"kubernetes.io/projected/a45690da-bfac-4359-88d2-e604fb44508e-kube-api-access-gfwr8\") pod \"a45690da-bfac-4359-88d2-e604fb44508e\" (UID: \"a45690da-bfac-4359-88d2-e604fb44508e\") " Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.074368 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a45690da-bfac-4359-88d2-e604fb44508e-kube-api-access-gfwr8" (OuterVolumeSpecName: "kube-api-access-gfwr8") pod "a45690da-bfac-4359-88d2-e604fb44508e" (UID: "a45690da-bfac-4359-88d2-e604fb44508e"). InnerVolumeSpecName "kube-api-access-gfwr8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.169728 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gfwr8\" (UniqueName: \"kubernetes.io/projected/a45690da-bfac-4359-88d2-e604fb44508e-kube-api-access-gfwr8\") on node \"crc\" DevicePath \"\"" Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.527375 5120 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-21 11:51:04 +0000 UTC" deadline="2026-02-18 04:33:37.611037234 +0000 UTC" Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.527413 5120 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="640h37m31.083626637s" Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.688313 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484716-phf4d" event={"ID":"a45690da-bfac-4359-88d2-e604fb44508e","Type":"ContainerDied","Data":"a89b7357e55c42a1482e9c53a2a331e370948f071ff18748e44904052b848977"} Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.688360 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a89b7357e55c42a1482e9c53a2a331e370948f071ff18748e44904052b848977" Jan 22 11:56:06 crc kubenswrapper[5120]: I0122 11:56:06.688359 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484716-phf4d" Jan 22 11:57:45 crc kubenswrapper[5120]: I0122 11:57:45.831229 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 11:57:45 crc kubenswrapper[5120]: I0122 11:57:45.834586 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.145542 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484718-tbtpd"] Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.146765 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a45690da-bfac-4359-88d2-e604fb44508e" containerName="oc" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.146781 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45690da-bfac-4359-88d2-e604fb44508e" containerName="oc" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.146945 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="a45690da-bfac-4359-88d2-e604fb44508e" containerName="oc" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.168190 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484718-tbtpd"] Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.168233 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484718-tbtpd" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.171523 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.171586 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.174838 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.279894 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gzsg\" (UniqueName: \"kubernetes.io/projected/b79a0076-aa90-4841-9865-b94aef438d2e-kube-api-access-5gzsg\") pod \"auto-csr-approver-29484718-tbtpd\" (UID: \"b79a0076-aa90-4841-9865-b94aef438d2e\") " pod="openshift-infra/auto-csr-approver-29484718-tbtpd" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.381948 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5gzsg\" (UniqueName: \"kubernetes.io/projected/b79a0076-aa90-4841-9865-b94aef438d2e-kube-api-access-5gzsg\") pod \"auto-csr-approver-29484718-tbtpd\" (UID: \"b79a0076-aa90-4841-9865-b94aef438d2e\") " pod="openshift-infra/auto-csr-approver-29484718-tbtpd" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.418081 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gzsg\" (UniqueName: \"kubernetes.io/projected/b79a0076-aa90-4841-9865-b94aef438d2e-kube-api-access-5gzsg\") pod \"auto-csr-approver-29484718-tbtpd\" (UID: \"b79a0076-aa90-4841-9865-b94aef438d2e\") " pod="openshift-infra/auto-csr-approver-29484718-tbtpd" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.504264 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484718-tbtpd" Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.770344 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484718-tbtpd"] Jan 22 11:58:00 crc kubenswrapper[5120]: I0122 11:58:00.775664 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 11:58:01 crc kubenswrapper[5120]: I0122 11:58:01.472403 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484718-tbtpd" event={"ID":"b79a0076-aa90-4841-9865-b94aef438d2e","Type":"ContainerStarted","Data":"badf357652e9ad1468f125bcc21c5e2857abc6f4573914f5411d17f0eb8c35f3"} Jan 22 11:58:02 crc kubenswrapper[5120]: I0122 11:58:02.481114 5120 generic.go:358] "Generic (PLEG): container finished" podID="b79a0076-aa90-4841-9865-b94aef438d2e" containerID="48535da82209ba80a74337bfe4adf5c3fb5d1066acf6b74856b7a35e8ae721fa" exitCode=0 Jan 22 11:58:02 crc kubenswrapper[5120]: I0122 11:58:02.481222 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484718-tbtpd" event={"ID":"b79a0076-aa90-4841-9865-b94aef438d2e","Type":"ContainerDied","Data":"48535da82209ba80a74337bfe4adf5c3fb5d1066acf6b74856b7a35e8ae721fa"} Jan 22 11:58:03 crc kubenswrapper[5120]: I0122 11:58:03.755908 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484718-tbtpd" Jan 22 11:58:03 crc kubenswrapper[5120]: I0122 11:58:03.831349 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gzsg\" (UniqueName: \"kubernetes.io/projected/b79a0076-aa90-4841-9865-b94aef438d2e-kube-api-access-5gzsg\") pod \"b79a0076-aa90-4841-9865-b94aef438d2e\" (UID: \"b79a0076-aa90-4841-9865-b94aef438d2e\") " Jan 22 11:58:03 crc kubenswrapper[5120]: I0122 11:58:03.838538 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b79a0076-aa90-4841-9865-b94aef438d2e-kube-api-access-5gzsg" (OuterVolumeSpecName: "kube-api-access-5gzsg") pod "b79a0076-aa90-4841-9865-b94aef438d2e" (UID: "b79a0076-aa90-4841-9865-b94aef438d2e"). InnerVolumeSpecName "kube-api-access-5gzsg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:58:03 crc kubenswrapper[5120]: I0122 11:58:03.933150 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5gzsg\" (UniqueName: \"kubernetes.io/projected/b79a0076-aa90-4841-9865-b94aef438d2e-kube-api-access-5gzsg\") on node \"crc\" DevicePath \"\"" Jan 22 11:58:04 crc kubenswrapper[5120]: I0122 11:58:04.498701 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484718-tbtpd" event={"ID":"b79a0076-aa90-4841-9865-b94aef438d2e","Type":"ContainerDied","Data":"badf357652e9ad1468f125bcc21c5e2857abc6f4573914f5411d17f0eb8c35f3"} Jan 22 11:58:04 crc kubenswrapper[5120]: I0122 11:58:04.498770 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="badf357652e9ad1468f125bcc21c5e2857abc6f4573914f5411d17f0eb8c35f3" Jan 22 11:58:04 crc kubenswrapper[5120]: I0122 11:58:04.498726 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484718-tbtpd" Jan 22 11:58:31 crc kubenswrapper[5120]: I0122 11:58:31.972372 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:58:31 crc kubenswrapper[5120]: I0122 11:58:31.973094 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.458434 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79"] Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.461167 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="kube-rbac-proxy" containerID="cri-o://b21acaba3cb296157d5914b47ec901abef4ecd818f666b1cfb316d247e9b6411" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.461258 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="ovnkube-cluster-manager" containerID="cri-o://53d59b7d2c319aaf356a45432146f39c690dafb55e7dcf1cae4ae5ee99919935" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.682986 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2mf7v"] Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.684012 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="nbdb" containerID="cri-o://fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.684069 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-acl-logging" containerID="cri-o://1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.684093 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.684170 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-controller" containerID="cri-o://bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.684163 5120 kuberuntime_container.go:858] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="sbdb" containerID="cri-o://3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.684148 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="northd" containerID="cri-o://a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.684014 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-node" containerID="cri-o://e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.750199 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovnkube-controller" containerID="cri-o://29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42" gracePeriod=30 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.911159 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.911205 5120 generic.go:358] "Generic (PLEG): container finished" podID="67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087" containerID="d29b8141fbabedfe7a0b24544216f57974fa5374814f1bca04930180d84aef59" exitCode=2 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.911307 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4lzht" event={"ID":"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087","Type":"ContainerDied","Data":"d29b8141fbabedfe7a0b24544216f57974fa5374814f1bca04930180d84aef59"} Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.912998 5120 scope.go:117] "RemoveContainer" containerID="d29b8141fbabedfe7a0b24544216f57974fa5374814f1bca04930180d84aef59" Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.913740 5120 generic.go:358] "Generic (PLEG): container finished" podID="cdb50da0-eb06-4959-b8da-70919924f77e" containerID="53d59b7d2c319aaf356a45432146f39c690dafb55e7dcf1cae4ae5ee99919935" exitCode=0 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.913795 5120 generic.go:358] "Generic (PLEG): container finished" podID="cdb50da0-eb06-4959-b8da-70919924f77e" containerID="b21acaba3cb296157d5914b47ec901abef4ecd818f666b1cfb316d247e9b6411" exitCode=0 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.913813 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" event={"ID":"cdb50da0-eb06-4959-b8da-70919924f77e","Type":"ContainerDied","Data":"53d59b7d2c319aaf356a45432146f39c690dafb55e7dcf1cae4ae5ee99919935"} Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.913846 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" event={"ID":"cdb50da0-eb06-4959-b8da-70919924f77e","Type":"ContainerDied","Data":"b21acaba3cb296157d5914b47ec901abef4ecd818f666b1cfb316d247e9b6411"} Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.920443 
5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2mf7v_dd62bdde-a6c1-42b3-9585-ba64c63cbb51/ovn-acl-logging/0.log" Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921017 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2mf7v_dd62bdde-a6c1-42b3-9585-ba64c63cbb51/ovn-controller/0.log" Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921521 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42" exitCode=0 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921550 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac" exitCode=0 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921559 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31" exitCode=0 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921572 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860" exitCode=0 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921580 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7" exitCode=0 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921590 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f" exitCode=143 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921600 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25" exitCode=143 Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.921891 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42"} Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.922060 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac"} Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.922091 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31"} Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.922171 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860"} Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.922183 5120 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7"} Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.922197 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f"} Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.922299 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25"} Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.973215 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:59:01 crc kubenswrapper[5120]: I0122 11:59:01.973353 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.471138 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2mf7v_dd62bdde-a6c1-42b3-9585-ba64c63cbb51/ovn-acl-logging/0.log" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.472199 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2mf7v_dd62bdde-a6c1-42b3-9585-ba64c63cbb51/ovn-controller/0.log" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.472831 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.475265 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.522316 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-kubelet\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527114 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-systemd-units\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527189 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-var-lib-openvswitch\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.524086 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9xdkb"] Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527222 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-systemd\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.522431 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527213 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527271 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-var-lib-cni-networks-ovn-kubernetes\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527305 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lt4m\" (UniqueName: \"kubernetes.io/projected/cdb50da0-eb06-4959-b8da-70919924f77e-kube-api-access-9lt4m\") pod \"cdb50da0-eb06-4959-b8da-70919924f77e\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527312 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527370 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-slash\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527423 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-script-lib\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527422 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527462 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-netd\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527495 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527526 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-ovn\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527581 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-bin\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527616 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-openvswitch\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527667 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-netns\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527625 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527713 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-node-log" (OuterVolumeSpecName: "node-log") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527688 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-node-log\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527739 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527800 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-etc-openvswitch\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527800 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527853 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b79a0076-aa90-4841-9865-b94aef438d2e" containerName="oc" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527905 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b79a0076-aa90-4841-9865-b94aef438d2e" containerName="oc" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527904 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527842 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-ovn-kubernetes\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527918 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-controller" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527950 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-controller" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527902 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527996 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="kube-rbac-proxy" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528006 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="kube-rbac-proxy" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528034 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="sbdb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528042 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="sbdb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528050 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-env-overrides\") pod \"cdb50da0-eb06-4959-b8da-70919924f77e\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528093 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-env-overrides\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528122 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdb50da0-eb06-4959-b8da-70919924f77e-ovn-control-plane-metrics-cert\") pod \"cdb50da0-eb06-4959-b8da-70919924f77e\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.527800 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528059 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="ovnkube-cluster-manager" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528175 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="ovnkube-cluster-manager" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528192 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kubecfg-setup" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528198 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kubecfg-setup" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528218 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovnkube-controller" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528225 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovnkube-controller" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528235 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="northd" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528241 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="northd" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528247 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-node" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528252 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-node" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528258 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528264 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528272 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="nbdb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528278 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="nbdb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528276 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528287 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-acl-logging" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528308 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-acl-logging" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528309 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-ovnkube-config\") pod \"cdb50da0-eb06-4959-b8da-70919924f77e\" (UID: \"cdb50da0-eb06-4959-b8da-70919924f77e\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528344 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdzrm\" (UniqueName: \"kubernetes.io/projected/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-kube-api-access-zdzrm\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528368 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-log-socket\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528406 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-log-socket" (OuterVolumeSpecName: "log-socket") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528428 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovn-node-metrics-cert\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528456 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="ovnkube-cluster-manager" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528470 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-controller" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528485 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="sbdb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528495 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovn-acl-logging" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528505 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528514 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="b79a0076-aa90-4841-9865-b94aef438d2e" containerName="oc" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528524 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="nbdb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528535 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="ovnkube-controller" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528543 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="northd" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528554 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerName="kube-rbac-proxy-node" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528563 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" containerName="kube-rbac-proxy" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528751 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528828 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cdb50da0-eb06-4959-b8da-70919924f77e" (UID: "cdb50da0-eb06-4959-b8da-70919924f77e"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528865 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cdb50da0-eb06-4959-b8da-70919924f77e" (UID: "cdb50da0-eb06-4959-b8da-70919924f77e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528883 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-slash" (OuterVolumeSpecName: "host-slash") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528913 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.528472 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-config\") pod \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\" (UID: \"dd62bdde-a6c1-42b3-9585-ba64c63cbb51\") " Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529513 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529534 5120 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529547 5120 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529559 5120 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529574 5120 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529587 5120 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-slash\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529601 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529612 5120 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529625 5120 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529636 5120 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529646 5120 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529660 5120 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529671 5120 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-node-log\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529683 5120 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529697 5120 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529710 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529722 5120 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529735 5120 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cdb50da0-eb06-4959-b8da-70919924f77e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.529745 5120 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-log-socket\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.534783 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-kube-api-access-zdzrm" 
(OuterVolumeSpecName: "kube-api-access-zdzrm") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "kube-api-access-zdzrm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.535585 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.537071 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdb50da0-eb06-4959-b8da-70919924f77e-kube-api-access-9lt4m" (OuterVolumeSpecName: "kube-api-access-9lt4m") pod "cdb50da0-eb06-4959-b8da-70919924f77e" (UID: "cdb50da0-eb06-4959-b8da-70919924f77e"). InnerVolumeSpecName "kube-api-access-9lt4m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.538294 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb50da0-eb06-4959-b8da-70919924f77e-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "cdb50da0-eb06-4959-b8da-70919924f77e" (UID: "cdb50da0-eb06-4959-b8da-70919924f77e"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.547422 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "dd62bdde-a6c1-42b3-9585-ba64c63cbb51" (UID: "dd62bdde-a6c1-42b3-9585-ba64c63cbb51"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.579682 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9"] Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.579935 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.584761 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630528 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-kubelet\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630583 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-etc-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630615 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-cni-bin\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630631 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630730 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-ovn\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630795 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovnkube-config\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630836 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-env-overrides\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630879 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-log-socket\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.630932 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631044 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-node-log\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631096 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd7kb\" (UniqueName: \"kubernetes.io/projected/2b921c3f-0298-48a5-8020-2e7932ce381a-kube-api-access-jd7kb\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631120 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-systemd-units\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631137 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck8wr\" (UniqueName: \"kubernetes.io/projected/f8707e23-b20a-4547-938b-1938b7cd5b7d-kube-api-access-ck8wr\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631162 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-run-ovn-kubernetes\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631260 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovn-node-metrics-cert\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631298 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-run-netns\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631329 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b921c3f-0298-48a5-8020-2e7932ce381a-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631397 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-slash\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631420 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-var-lib-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631447 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b921c3f-0298-48a5-8020-2e7932ce381a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631489 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-cni-netd\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631513 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovnkube-script-lib\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631531 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b921c3f-0298-48a5-8020-2e7932ce381a-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631571 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-systemd\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631719 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9lt4m\" (UniqueName: \"kubernetes.io/projected/cdb50da0-eb06-4959-b8da-70919924f77e-kube-api-access-9lt4m\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631743 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdb50da0-eb06-4959-b8da-70919924f77e-ovn-control-plane-metrics-cert\") on node 
\"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631756 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zdzrm\" (UniqueName: \"kubernetes.io/projected/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-kube-api-access-zdzrm\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631768 5120 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.631798 5120 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dd62bdde-a6c1-42b3-9585-ba64c63cbb51-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733050 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-run-ovn-kubernetes\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733119 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovn-node-metrics-cert\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733142 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-run-netns\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733181 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-run-netns\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733181 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-run-ovn-kubernetes\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733217 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b921c3f-0298-48a5-8020-2e7932ce381a-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733238 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-slash\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733253 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-var-lib-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733272 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b921c3f-0298-48a5-8020-2e7932ce381a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733293 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-cni-netd\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733307 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovnkube-script-lib\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733316 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-slash\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733326 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b921c3f-0298-48a5-8020-2e7932ce381a-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733346 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-systemd\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733364 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-kubelet\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733384 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-etc-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733419 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-cni-bin\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733438 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733454 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-ovn\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733476 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovnkube-config\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733492 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-env-overrides\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733515 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-log-socket\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733534 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733572 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-node-log\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733608 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jd7kb\" (UniqueName: \"kubernetes.io/projected/2b921c3f-0298-48a5-8020-2e7932ce381a-kube-api-access-jd7kb\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733629 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-systemd-units\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733645 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ck8wr\" (UniqueName: \"kubernetes.io/projected/f8707e23-b20a-4547-938b-1938b7cd5b7d-kube-api-access-ck8wr\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733881 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2b921c3f-0298-48a5-8020-2e7932ce381a-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733921 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-cni-bin\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733947 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-var-lib-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.733989 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-log-socket\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734037 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734072 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-ovn\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734090 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" 
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734129 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-node-log\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734346 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-systemd-units\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734485 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-cni-netd\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734642 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-run-systemd\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734875 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-host-kubelet\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734901 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f8707e23-b20a-4547-938b-1938b7cd5b7d-etc-openvswitch\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734935 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovnkube-config\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734937 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-env-overrides\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.734960 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2b921c3f-0298-48a5-8020-2e7932ce381a-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.735669 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovnkube-script-lib\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.741723 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2b921c3f-0298-48a5-8020-2e7932ce381a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.742617 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8707e23-b20a-4547-938b-1938b7cd5b7d-ovn-node-metrics-cert\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.759555 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd7kb\" (UniqueName: \"kubernetes.io/projected/2b921c3f-0298-48a5-8020-2e7932ce381a-kube-api-access-jd7kb\") pod \"ovnkube-control-plane-97c9b6c48-lvft9\" (UID: \"2b921c3f-0298-48a5-8020-2e7932ce381a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.766841 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck8wr\" (UniqueName: \"kubernetes.io/projected/f8707e23-b20a-4547-938b-1938b7cd5b7d-kube-api-access-ck8wr\") pod \"ovnkube-node-9xdkb\" (UID: \"f8707e23-b20a-4547-938b-1938b7cd5b7d\") " pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.925373 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.929787 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.929949 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4lzht" event={"ID":"67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087","Type":"ContainerStarted","Data":"6d1ed07fd41158a3e43ec2ad9f9b07ddffc584f50ca4bb7898e60f5cccb1dffa"}
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.932629 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79" event={"ID":"cdb50da0-eb06-4959-b8da-70919924f77e","Type":"ContainerDied","Data":"20963fbe51218d226586341531cebabcba165784d34f9b709674547be7d8df72"}
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.932707 5120 scope.go:117] "RemoveContainer" containerID="53d59b7d2c319aaf356a45432146f39c690dafb55e7dcf1cae4ae5ee99919935"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.933148 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.933918 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.938771 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2mf7v_dd62bdde-a6c1-42b3-9585-ba64c63cbb51/ovn-acl-logging/0.log"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.939425 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2mf7v_dd62bdde-a6c1-42b3-9585-ba64c63cbb51/ovn-controller/0.log"
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.939826 5120 generic.go:358] "Generic (PLEG): container finished" podID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" containerID="a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf" exitCode=0
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.939928 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf"}
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.939990 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v" event={"ID":"dd62bdde-a6c1-42b3-9585-ba64c63cbb51","Type":"ContainerDied","Data":"948f3922f0403f01af9c080b4700105b9cfcfffd97d2155e3cc2c89092d9038d"}
Jan 22 11:59:02 crc kubenswrapper[5120]: I0122 11:59:02.940230 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2mf7v"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.004040 5120 scope.go:117] "RemoveContainer" containerID="b21acaba3cb296157d5914b47ec901abef4ecd818f666b1cfb316d247e9b6411"
Jan 22 11:59:03 crc kubenswrapper[5120]: W0122 11:59:03.006706 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b921c3f_0298_48a5_8020_2e7932ce381a.slice/crio-7c1049094d4ba6aa9fefeecfbea8f552b69cabdd477a505997ca758580406434 WatchSource:0}: Error finding container 7c1049094d4ba6aa9fefeecfbea8f552b69cabdd477a505997ca758580406434: Status 404 returned error can't find the container with id 7c1049094d4ba6aa9fefeecfbea8f552b69cabdd477a505997ca758580406434
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.023227 5120 scope.go:117] "RemoveContainer" containerID="29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.024739 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79"]
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.028531 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-xzh79"]
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.057335 5120 scope.go:117] "RemoveContainer" containerID="3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.071383 5120 scope.go:117] "RemoveContainer" containerID="fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.083303 5120 scope.go:117] "RemoveContainer" containerID="a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.097256 5120 scope.go:117] "RemoveContainer" containerID="f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.110936 5120 scope.go:117] "RemoveContainer" containerID="e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.119370 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2mf7v"]
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.119413 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2mf7v"]
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.128004 5120 scope.go:117] "RemoveContainer" containerID="1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.146404 5120 scope.go:117] "RemoveContainer" containerID="bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.173330 5120 scope.go:117] "RemoveContainer" containerID="3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.194392 5120 scope.go:117] "RemoveContainer" containerID="29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42"
Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.195290 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42\": container with ID starting with 29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42 not found: ID does not exist" containerID="29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.195352 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42"} err="failed to get container status \"29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42\": rpc error: code = NotFound desc = could not find container \"29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42\": container with ID starting with 29c29478ae7505ea16587db05884339bd9c66ee1da87d8da71e4d78fa0821e42 not found: ID does not exist"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.195382 5120 scope.go:117] "RemoveContainer" containerID="3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac"
Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.196425 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac\": container with ID starting with 3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac not found: ID does not exist" containerID="3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.196465 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac"} err="failed to get container status \"3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac\": rpc error: code = NotFound desc = could not find container \"3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac\": container with ID starting with 3f54e9ea68daffd338ce4d1b48fc95b48c8f4454371da3d34787786d2ec02aac not found: ID does not exist"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.196485 5120 scope.go:117] "RemoveContainer" containerID="fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31"
Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.196830 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31\": container with ID starting with fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31 not found: ID does not exist" containerID="fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.196879 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31"} err="failed to get container status \"fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31\": rpc error: code = NotFound desc = could not find container \"fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31\": container with ID starting with fa6924cab3fb62a3d082f9ba370a96e5e7ab2d47c44c268324b727cb6cfbcd31 not found: ID does not exist"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.196909 5120 scope.go:117] "RemoveContainer" containerID="a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf"
Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.197412 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf\": container with ID starting with a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf not found: ID does not exist" containerID="a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.197468 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf"} err="failed to get container status \"a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf\": rpc error: code = NotFound desc = could not find container \"a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf\": container with ID starting with a52fe62265acc53f59227988efecf2209707222abdac4d713d0a858d3eeb31cf not found: ID does not exist"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.197489 5120 scope.go:117] "RemoveContainer" containerID="f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860"
Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.197763 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860\": container with ID starting with f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860 not found: ID does not exist" containerID="f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860"
Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.197845 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860"} err="failed to get container status \"f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860\": rpc
error: code = NotFound desc = could not find container \"f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860\": container with ID starting with f092db392417f256b4f0135f1ff3ff3d4129b64b53982c580d3655bc52b38860 not found: ID does not exist" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.197909 5120 scope.go:117] "RemoveContainer" containerID="e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7" Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.198233 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7\": container with ID starting with e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7 not found: ID does not exist" containerID="e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.198268 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7"} err="failed to get container status \"e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7\": rpc error: code = NotFound desc = could not find container \"e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7\": container with ID starting with e6f598572d7ee3f4456ac54c210e204149f4a9ec71c387867d3b396283eafec7 not found: ID does not exist" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.198288 5120 scope.go:117] "RemoveContainer" containerID="1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f" Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.198638 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f\": container with ID starting with 1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f not found: ID does not exist" containerID="1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.198669 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f"} err="failed to get container status \"1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f\": rpc error: code = NotFound desc = could not find container \"1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f\": container with ID starting with 1c8b54f45344390a57a15807f13fc415b25522bda483800e1e6b4e1a80d11f4f not found: ID does not exist" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.198683 5120 scope.go:117] "RemoveContainer" containerID="bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25" Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.198907 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25\": container with ID starting with bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25 not found: ID does not exist" containerID="bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.198974 5120 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25"} err="failed to get container status \"bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25\": rpc error: code = NotFound desc = could not find container \"bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25\": container with ID starting with bb9a1f9ecf9941c93d405464147ed7fce485a179d00bfa3094934d0400409f25 not found: ID does not exist" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.198994 5120 scope.go:117] "RemoveContainer" containerID="3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356" Jan 22 11:59:03 crc kubenswrapper[5120]: E0122 11:59:03.199292 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356\": container with ID starting with 3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356 not found: ID does not exist" containerID="3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.199329 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356"} err="failed to get container status \"3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356\": rpc error: code = NotFound desc = could not find container \"3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356\": container with ID starting with 3779fe53a1bd1ecb3df812f8ab103a8b1e9c3b1c7d9ac86e1b961d20be69d356 not found: ID does not exist" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.583848 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdb50da0-eb06-4959-b8da-70919924f77e" path="/var/lib/kubelet/pods/cdb50da0-eb06-4959-b8da-70919924f77e/volumes" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.584486 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd62bdde-a6c1-42b3-9585-ba64c63cbb51" path="/var/lib/kubelet/pods/dd62bdde-a6c1-42b3-9585-ba64c63cbb51/volumes" Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.953014 5120 generic.go:358] "Generic (PLEG): container finished" podID="f8707e23-b20a-4547-938b-1938b7cd5b7d" containerID="e0caf6d3b243b2fa89908211b540dda30bd6d0236528a194c92a37b33ff165ff" exitCode=0 Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.953061 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerDied","Data":"e0caf6d3b243b2fa89908211b540dda30bd6d0236528a194c92a37b33ff165ff"} Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.953140 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"f428aeccccecc03c7c096b1f1e17d299174a54c34bdac3db8c4a6dac0ba6fe50"} Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.955809 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" event={"ID":"2b921c3f-0298-48a5-8020-2e7932ce381a","Type":"ContainerStarted","Data":"3a6cfabe288fa7b7228174bdc16aef8fe815b2268b6878d63decf8b6cb014b56"} Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.955898 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" event={"ID":"2b921c3f-0298-48a5-8020-2e7932ce381a","Type":"ContainerStarted","Data":"36eacf406149b00c20107c824b86dcae1d9ff059fb4df9b04bef692ac0a22ec0"} Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.955925 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" event={"ID":"2b921c3f-0298-48a5-8020-2e7932ce381a","Type":"ContainerStarted","Data":"7c1049094d4ba6aa9fefeecfbea8f552b69cabdd477a505997ca758580406434"} Jan 22 11:59:03 crc kubenswrapper[5120]: I0122 11:59:03.999318 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lvft9" podStartSLOduration=2.999301275 podStartE2EDuration="2.999301275s" podCreationTimestamp="2026-01-22 11:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:59:03.99702794 +0000 UTC m=+678.740976281" watchObservedRunningTime="2026-01-22 11:59:03.999301275 +0000 UTC m=+678.743249616" Jan 22 11:59:04 crc kubenswrapper[5120]: I0122 11:59:04.967216 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"5f054527e26d47e40b7a43de934cfc37cedf6605143dcc42603ff1b601db56a6"} Jan 22 11:59:04 crc kubenswrapper[5120]: I0122 11:59:04.967279 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"153387bf933f8c7599502d58d493aeb3e9ba0d9dbdf2d324a911d357c63600ad"} Jan 22 11:59:05 crc kubenswrapper[5120]: I0122 11:59:05.976253 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"6580c9e2a4b24fa001eba7200992a35a59c292453e9c13d305be3dd9994ce202"} Jan 22 11:59:05 crc kubenswrapper[5120]: I0122 11:59:05.976813 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"800fad52f15b7caf05dbe96e4ad2f4bb01ae5ea793cd18575f189b6b5e954311"} Jan 22 11:59:07 crc kubenswrapper[5120]: I0122 11:59:07.006981 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"c38d96fc1b850a9d79b7cd2331227d6f44907167a0f738fb38011fc8c35f768c"} Jan 22 11:59:07 crc kubenswrapper[5120]: I0122 11:59:07.007073 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"e990c7cef1b03bb5ecf36bd0de41772488021e3ffb1ea0fc469a3986800dba3e"} Jan 22 11:59:10 crc kubenswrapper[5120]: I0122 11:59:10.061601 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"d585533ddd988ab9c76820855f8f988de13240fa743a200b900978f79d19744e"} Jan 22 11:59:13 crc kubenswrapper[5120]: I0122 11:59:13.097378 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" event={"ID":"f8707e23-b20a-4547-938b-1938b7cd5b7d","Type":"ContainerStarted","Data":"97d0ab39537aca068ed5f7d070b34e9a0bb68c3a186b2abd75f6ce81d7d01f2f"} Jan 22 11:59:13 crc kubenswrapper[5120]: I0122 11:59:13.098201 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:13 crc kubenswrapper[5120]: I0122 11:59:13.098246 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:13 crc kubenswrapper[5120]: I0122 11:59:13.098273 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:13 crc kubenswrapper[5120]: I0122 11:59:13.134243 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:13 crc kubenswrapper[5120]: I0122 11:59:13.134361 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 11:59:13 crc kubenswrapper[5120]: I0122 11:59:13.154667 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" podStartSLOduration=11.154644104 podStartE2EDuration="11.154644104s" podCreationTimestamp="2026-01-22 11:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 11:59:13.151843626 +0000 UTC m=+687.895791977" watchObservedRunningTime="2026-01-22 11:59:13.154644104 +0000 UTC m=+687.898592445" Jan 22 11:59:32 crc kubenswrapper[5120]: I0122 11:59:32.094135 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 11:59:32 crc kubenswrapper[5120]: I0122 11:59:32.094926 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 11:59:32 crc kubenswrapper[5120]: I0122 11:59:32.095036 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 11:59:32 crc kubenswrapper[5120]: I0122 11:59:32.095897 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bce4cc383007abddfe015e880c39e78b9257e350f68f93cf80d0801b94ef0ab7"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 11:59:32 crc kubenswrapper[5120]: I0122 11:59:32.095973 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://bce4cc383007abddfe015e880c39e78b9257e350f68f93cf80d0801b94ef0ab7" gracePeriod=600 Jan 22 11:59:33 crc kubenswrapper[5120]: I0122 
11:59:33.118105 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="bce4cc383007abddfe015e880c39e78b9257e350f68f93cf80d0801b94ef0ab7" exitCode=0 Jan 22 11:59:33 crc kubenswrapper[5120]: I0122 11:59:33.118233 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"bce4cc383007abddfe015e880c39e78b9257e350f68f93cf80d0801b94ef0ab7"} Jan 22 11:59:33 crc kubenswrapper[5120]: I0122 11:59:33.119098 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"7b1b1dbcaf6053c4f4e587f597b1d0bcb38e183b1d64f8acf48abb200ec2450a"} Jan 22 11:59:33 crc kubenswrapper[5120]: I0122 11:59:33.119132 5120 scope.go:117] "RemoveContainer" containerID="e857eb1297fb678314f51a1be1533aaadb53a0e5183e6c42cc64ea1b07667a10" Jan 22 11:59:45 crc kubenswrapper[5120]: I0122 11:59:45.149655 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9xdkb" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.140415 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484720-f92nq"] Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.158501 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq"] Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.172534 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484720-f92nq"] Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.172581 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq"] Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.172832 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.173571 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484720-f92nq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.175157 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.175167 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.176141 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.176298 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.176181 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.225143 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d57ca8ee-4b8e-4b45-983a-11332a457cf8-config-volume\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.225213 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zq25\" (UniqueName: \"kubernetes.io/projected/ee0a1780-1d96-46a3-8386-55404b6d1299-kube-api-access-6zq25\") pod \"auto-csr-approver-29484720-f92nq\" (UID: \"ee0a1780-1d96-46a3-8386-55404b6d1299\") " pod="openshift-infra/auto-csr-approver-29484720-f92nq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.225363 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sjkb\" (UniqueName: \"kubernetes.io/projected/d57ca8ee-4b8e-4b45-983a-11332a457cf8-kube-api-access-7sjkb\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.225537 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d57ca8ee-4b8e-4b45-983a-11332a457cf8-secret-volume\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.327138 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7sjkb\" (UniqueName: \"kubernetes.io/projected/d57ca8ee-4b8e-4b45-983a-11332a457cf8-kube-api-access-7sjkb\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.327455 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d57ca8ee-4b8e-4b45-983a-11332a457cf8-secret-volume\") pod 
\"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.327569 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d57ca8ee-4b8e-4b45-983a-11332a457cf8-config-volume\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.327672 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6zq25\" (UniqueName: \"kubernetes.io/projected/ee0a1780-1d96-46a3-8386-55404b6d1299-kube-api-access-6zq25\") pod \"auto-csr-approver-29484720-f92nq\" (UID: \"ee0a1780-1d96-46a3-8386-55404b6d1299\") " pod="openshift-infra/auto-csr-approver-29484720-f92nq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.328618 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d57ca8ee-4b8e-4b45-983a-11332a457cf8-config-volume\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.337638 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d57ca8ee-4b8e-4b45-983a-11332a457cf8-secret-volume\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.347522 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zq25\" (UniqueName: \"kubernetes.io/projected/ee0a1780-1d96-46a3-8386-55404b6d1299-kube-api-access-6zq25\") pod \"auto-csr-approver-29484720-f92nq\" (UID: \"ee0a1780-1d96-46a3-8386-55404b6d1299\") " pod="openshift-infra/auto-csr-approver-29484720-f92nq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.349216 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sjkb\" (UniqueName: \"kubernetes.io/projected/d57ca8ee-4b8e-4b45-983a-11332a457cf8-kube-api-access-7sjkb\") pod \"collect-profiles-29484720-bt5vq\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.503208 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.513584 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484720-f92nq" Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.733391 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484720-f92nq"] Jan 22 12:00:00 crc kubenswrapper[5120]: I0122 12:00:00.783848 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq"] Jan 22 12:00:00 crc kubenswrapper[5120]: W0122 12:00:00.790850 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd57ca8ee_4b8e_4b45_983a_11332a457cf8.slice/crio-555a31de425b735c917501ecc82650fcb3d09292cc7728c2732b97d1376c6b2d WatchSource:0}: Error finding container 555a31de425b735c917501ecc82650fcb3d09292cc7728c2732b97d1376c6b2d: Status 404 returned error can't find the container with id 555a31de425b735c917501ecc82650fcb3d09292cc7728c2732b97d1376c6b2d Jan 22 12:00:01 crc kubenswrapper[5120]: I0122 12:00:01.340223 5120 generic.go:358] "Generic (PLEG): container finished" podID="d57ca8ee-4b8e-4b45-983a-11332a457cf8" containerID="73df242a325822ccf1cead216fb72d99d7eb4b7f40cfe98bdeb214c25306e468" exitCode=0 Jan 22 12:00:01 crc kubenswrapper[5120]: I0122 12:00:01.340835 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" event={"ID":"d57ca8ee-4b8e-4b45-983a-11332a457cf8","Type":"ContainerDied","Data":"73df242a325822ccf1cead216fb72d99d7eb4b7f40cfe98bdeb214c25306e468"} Jan 22 12:00:01 crc kubenswrapper[5120]: I0122 12:00:01.340870 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" event={"ID":"d57ca8ee-4b8e-4b45-983a-11332a457cf8","Type":"ContainerStarted","Data":"555a31de425b735c917501ecc82650fcb3d09292cc7728c2732b97d1376c6b2d"} Jan 22 12:00:01 crc kubenswrapper[5120]: I0122 12:00:01.342344 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484720-f92nq" event={"ID":"ee0a1780-1d96-46a3-8386-55404b6d1299","Type":"ContainerStarted","Data":"1eb12823267a042c8a078657072b8ba02586a08840e4a77c20ef76a66c21b12d"} Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.543518 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.657571 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d57ca8ee-4b8e-4b45-983a-11332a457cf8-config-volume\") pod \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.657634 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sjkb\" (UniqueName: \"kubernetes.io/projected/d57ca8ee-4b8e-4b45-983a-11332a457cf8-kube-api-access-7sjkb\") pod \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.657727 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d57ca8ee-4b8e-4b45-983a-11332a457cf8-secret-volume\") pod \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\" (UID: \"d57ca8ee-4b8e-4b45-983a-11332a457cf8\") " Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.658709 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d57ca8ee-4b8e-4b45-983a-11332a457cf8-config-volume" (OuterVolumeSpecName: "config-volume") pod "d57ca8ee-4b8e-4b45-983a-11332a457cf8" (UID: "d57ca8ee-4b8e-4b45-983a-11332a457cf8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.663985 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d57ca8ee-4b8e-4b45-983a-11332a457cf8-kube-api-access-7sjkb" (OuterVolumeSpecName: "kube-api-access-7sjkb") pod "d57ca8ee-4b8e-4b45-983a-11332a457cf8" (UID: "d57ca8ee-4b8e-4b45-983a-11332a457cf8"). InnerVolumeSpecName "kube-api-access-7sjkb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.664607 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57ca8ee-4b8e-4b45-983a-11332a457cf8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d57ca8ee-4b8e-4b45-983a-11332a457cf8" (UID: "d57ca8ee-4b8e-4b45-983a-11332a457cf8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.759226 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d57ca8ee-4b8e-4b45-983a-11332a457cf8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.759265 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d57ca8ee-4b8e-4b45-983a-11332a457cf8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:02 crc kubenswrapper[5120]: I0122 12:00:02.759277 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7sjkb\" (UniqueName: \"kubernetes.io/projected/d57ca8ee-4b8e-4b45-983a-11332a457cf8-kube-api-access-7sjkb\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:03 crc kubenswrapper[5120]: I0122 12:00:03.359836 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" Jan 22 12:00:03 crc kubenswrapper[5120]: I0122 12:00:03.359886 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq" event={"ID":"d57ca8ee-4b8e-4b45-983a-11332a457cf8","Type":"ContainerDied","Data":"555a31de425b735c917501ecc82650fcb3d09292cc7728c2732b97d1376c6b2d"} Jan 22 12:00:03 crc kubenswrapper[5120]: I0122 12:00:03.360638 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="555a31de425b735c917501ecc82650fcb3d09292cc7728c2732b97d1376c6b2d" Jan 22 12:00:10 crc kubenswrapper[5120]: I0122 12:00:10.955449 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pn4sg"] Jan 22 12:00:10 crc kubenswrapper[5120]: I0122 12:00:10.956377 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pn4sg" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="registry-server" containerID="cri-o://8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce" gracePeriod=30 Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.303311 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.408369 5120 generic.go:358] "Generic (PLEG): container finished" podID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerID="8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce" exitCode=0 Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.408501 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pn4sg" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.408657 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn4sg" event={"ID":"db99c964-abd0-4bc6-a71a-79a9c5a3c718","Type":"ContainerDied","Data":"8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce"} Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.408706 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pn4sg" event={"ID":"db99c964-abd0-4bc6-a71a-79a9c5a3c718","Type":"ContainerDied","Data":"be77ef2cfeb1733dbed252c7c38f2239d4e5745805f1f6b72bcb11727aa3ba6e"} Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.408729 5120 scope.go:117] "RemoveContainer" containerID="8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.429089 5120 scope.go:117] "RemoveContainer" containerID="313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.447903 5120 scope.go:117] "RemoveContainer" containerID="23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.478852 5120 scope.go:117] "RemoveContainer" containerID="8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce" Jan 22 12:00:11 crc kubenswrapper[5120]: E0122 12:00:11.479502 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce\": container with ID starting with 
8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce not found: ID does not exist" containerID="8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.479718 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce"} err="failed to get container status \"8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce\": rpc error: code = NotFound desc = could not find container \"8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce\": container with ID starting with 8b163792bb97360e66ff49a6671a168e8360ed01068a2e1a81223660edca82ce not found: ID does not exist" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.479863 5120 scope.go:117] "RemoveContainer" containerID="313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996" Jan 22 12:00:11 crc kubenswrapper[5120]: E0122 12:00:11.480641 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996\": container with ID starting with 313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996 not found: ID does not exist" containerID="313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.480705 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996"} err="failed to get container status \"313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996\": rpc error: code = NotFound desc = could not find container \"313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996\": container with ID starting with 313d44d3fc66f67b7d63b858b58681ab05c602e2795d9b9acc7c77eaa45c2996 not found: ID does not exist" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.480742 5120 scope.go:117] "RemoveContainer" containerID="23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea" Jan 22 12:00:11 crc kubenswrapper[5120]: E0122 12:00:11.481333 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea\": container with ID starting with 23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea not found: ID does not exist" containerID="23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.481520 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea"} err="failed to get container status \"23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea\": rpc error: code = NotFound desc = could not find container \"23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea\": container with ID starting with 23305ca08eff0d7027d5b25fdf18268d3a1bc74ff0ad9a6abad880b0f080c4ea not found: ID does not exist" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.496783 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-utilities\") pod \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\" (UID: 
\"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.497022 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-catalog-content\") pod \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.497184 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfmsb\" (UniqueName: \"kubernetes.io/projected/db99c964-abd0-4bc6-a71a-79a9c5a3c718-kube-api-access-qfmsb\") pod \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\" (UID: \"db99c964-abd0-4bc6-a71a-79a9c5a3c718\") " Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.498523 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-utilities" (OuterVolumeSpecName: "utilities") pod "db99c964-abd0-4bc6-a71a-79a9c5a3c718" (UID: "db99c964-abd0-4bc6-a71a-79a9c5a3c718"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.503741 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db99c964-abd0-4bc6-a71a-79a9c5a3c718-kube-api-access-qfmsb" (OuterVolumeSpecName: "kube-api-access-qfmsb") pod "db99c964-abd0-4bc6-a71a-79a9c5a3c718" (UID: "db99c964-abd0-4bc6-a71a-79a9c5a3c718"). InnerVolumeSpecName "kube-api-access-qfmsb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.511792 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db99c964-abd0-4bc6-a71a-79a9c5a3c718" (UID: "db99c964-abd0-4bc6-a71a-79a9c5a3c718"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.599343 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.599418 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db99c964-abd0-4bc6-a71a-79a9c5a3c718-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.599435 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qfmsb\" (UniqueName: \"kubernetes.io/projected/db99c964-abd0-4bc6-a71a-79a9c5a3c718-kube-api-access-qfmsb\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.732698 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pn4sg"] Jan 22 12:00:11 crc kubenswrapper[5120]: I0122 12:00:11.735864 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pn4sg"] Jan 22 12:00:13 crc kubenswrapper[5120]: I0122 12:00:13.578691 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" path="/var/lib/kubelet/pods/db99c964-abd0-4bc6-a71a-79a9c5a3c718/volumes" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.082364 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b"] Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083108 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="registry-server" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083126 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="registry-server" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083140 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d57ca8ee-4b8e-4b45-983a-11332a457cf8" containerName="collect-profiles" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083147 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57ca8ee-4b8e-4b45-983a-11332a457cf8" containerName="collect-profiles" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083159 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="extract-content" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083166 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="extract-content" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083177 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="extract-utilities" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083185 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="extract-utilities" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.083339 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="d57ca8ee-4b8e-4b45-983a-11332a457cf8" containerName="collect-profiles" Jan 22 12:00:15 crc 
kubenswrapper[5120]: I0122 12:00:15.083355 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="db99c964-abd0-4bc6-a71a-79a9c5a3c718" containerName="registry-server" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.196732 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b"] Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.196921 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.199694 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.251187 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.251572 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.251674 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xpn4\" (UniqueName: \"kubernetes.io/projected/04591ad2-b41c-420f-9328-a9ff515b4e1e-kube-api-access-2xpn4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.352818 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.353435 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2xpn4\" (UniqueName: \"kubernetes.io/projected/04591ad2-b41c-420f-9328-a9ff515b4e1e-kube-api-access-2xpn4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.353607 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: 
\"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.353623 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.353912 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.372386 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xpn4\" (UniqueName: \"kubernetes.io/projected/04591ad2-b41c-420f-9328-a9ff515b4e1e-kube-api-access-2xpn4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.515558 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:15 crc kubenswrapper[5120]: I0122 12:00:15.750201 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b"] Jan 22 12:00:16 crc kubenswrapper[5120]: I0122 12:00:16.450409 5120 generic.go:358] "Generic (PLEG): container finished" podID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerID="2b450f0d994340ebedd7d257fe63748df13451d9c058ec3625914a0aaf1d9d77" exitCode=0 Jan 22 12:00:16 crc kubenswrapper[5120]: I0122 12:00:16.450493 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" event={"ID":"04591ad2-b41c-420f-9328-a9ff515b4e1e","Type":"ContainerDied","Data":"2b450f0d994340ebedd7d257fe63748df13451d9c058ec3625914a0aaf1d9d77"} Jan 22 12:00:16 crc kubenswrapper[5120]: I0122 12:00:16.450923 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" event={"ID":"04591ad2-b41c-420f-9328-a9ff515b4e1e","Type":"ContainerStarted","Data":"3cbd8e79b0bfe9d5f65f0fa9a41114f503e404da92d845186ed8ae61cb433ac6"} Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.037744 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gppd2"] Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.044859 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.056764 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gppd2"] Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.101207 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-utilities\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.101400 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-catalog-content\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.101440 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5npt\" (UniqueName: \"kubernetes.io/projected/23170abf-1fa3-4863-80e8-d7606fdeae60-kube-api-access-v5npt\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.202784 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-utilities\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.202902 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-catalog-content\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.202927 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v5npt\" (UniqueName: \"kubernetes.io/projected/23170abf-1fa3-4863-80e8-d7606fdeae60-kube-api-access-v5npt\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.203394 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-utilities\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.203442 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-catalog-content\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.223855 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v5npt\" (UniqueName: \"kubernetes.io/projected/23170abf-1fa3-4863-80e8-d7606fdeae60-kube-api-access-v5npt\") pod \"redhat-operators-gppd2\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.385911 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.482224 5120 generic.go:358] "Generic (PLEG): container finished" podID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerID="facf0e3289b882e9251e54633940bb8908cb9734e29c7069dbcf2f9c7d82dea8" exitCode=0 Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.482398 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" event={"ID":"04591ad2-b41c-420f-9328-a9ff515b4e1e","Type":"ContainerDied","Data":"facf0e3289b882e9251e54633940bb8908cb9734e29c7069dbcf2f9c7d82dea8"} Jan 22 12:00:18 crc kubenswrapper[5120]: I0122 12:00:18.830403 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gppd2"] Jan 22 12:00:19 crc kubenswrapper[5120]: I0122 12:00:19.490470 5120 generic.go:358] "Generic (PLEG): container finished" podID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerID="99517dae7f9a7b3bdfa32446a1a6d06e3af1f8eddda207797f368f264143f4c6" exitCode=0 Jan 22 12:00:19 crc kubenswrapper[5120]: I0122 12:00:19.490573 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" event={"ID":"04591ad2-b41c-420f-9328-a9ff515b4e1e","Type":"ContainerDied","Data":"99517dae7f9a7b3bdfa32446a1a6d06e3af1f8eddda207797f368f264143f4c6"} Jan 22 12:00:19 crc kubenswrapper[5120]: I0122 12:00:19.492633 5120 generic.go:358] "Generic (PLEG): container finished" podID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerID="c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e" exitCode=0 Jan 22 12:00:19 crc kubenswrapper[5120]: I0122 12:00:19.492738 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gppd2" event={"ID":"23170abf-1fa3-4863-80e8-d7606fdeae60","Type":"ContainerDied","Data":"c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e"} Jan 22 12:00:19 crc kubenswrapper[5120]: I0122 12:00:19.492792 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gppd2" event={"ID":"23170abf-1fa3-4863-80e8-d7606fdeae60","Type":"ContainerStarted","Data":"95dee903a35163143fb71dae252bdc46fab906f21721e1c598215d1ffc26c24e"} Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.801778 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.840613 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-util\") pod \"04591ad2-b41c-420f-9328-a9ff515b4e1e\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.840873 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-bundle\") pod \"04591ad2-b41c-420f-9328-a9ff515b4e1e\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.841021 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xpn4\" (UniqueName: \"kubernetes.io/projected/04591ad2-b41c-420f-9328-a9ff515b4e1e-kube-api-access-2xpn4\") pod \"04591ad2-b41c-420f-9328-a9ff515b4e1e\" (UID: \"04591ad2-b41c-420f-9328-a9ff515b4e1e\") " Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.845243 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-bundle" (OuterVolumeSpecName: "bundle") pod "04591ad2-b41c-420f-9328-a9ff515b4e1e" (UID: "04591ad2-b41c-420f-9328-a9ff515b4e1e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.856653 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-util" (OuterVolumeSpecName: "util") pod "04591ad2-b41c-420f-9328-a9ff515b4e1e" (UID: "04591ad2-b41c-420f-9328-a9ff515b4e1e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.867447 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04591ad2-b41c-420f-9328-a9ff515b4e1e-kube-api-access-2xpn4" (OuterVolumeSpecName: "kube-api-access-2xpn4") pod "04591ad2-b41c-420f-9328-a9ff515b4e1e" (UID: "04591ad2-b41c-420f-9328-a9ff515b4e1e"). InnerVolumeSpecName "kube-api-access-2xpn4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.943917 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2xpn4\" (UniqueName: \"kubernetes.io/projected/04591ad2-b41c-420f-9328-a9ff515b4e1e-kube-api-access-2xpn4\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.944027 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-util\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:20 crc kubenswrapper[5120]: I0122 12:00:20.944049 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/04591ad2-b41c-420f-9328-a9ff515b4e1e-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:21 crc kubenswrapper[5120]: I0122 12:00:21.515796 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" event={"ID":"04591ad2-b41c-420f-9328-a9ff515b4e1e","Type":"ContainerDied","Data":"3cbd8e79b0bfe9d5f65f0fa9a41114f503e404da92d845186ed8ae61cb433ac6"} Jan 22 12:00:21 crc kubenswrapper[5120]: I0122 12:00:21.516890 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cbd8e79b0bfe9d5f65f0fa9a41114f503e404da92d845186ed8ae61cb433ac6" Jan 22 12:00:21 crc kubenswrapper[5120]: I0122 12:00:21.515850 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b" Jan 22 12:00:21 crc kubenswrapper[5120]: I0122 12:00:21.519020 5120 generic.go:358] "Generic (PLEG): container finished" podID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerID="66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b" exitCode=0 Jan 22 12:00:21 crc kubenswrapper[5120]: I0122 12:00:21.519096 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gppd2" event={"ID":"23170abf-1fa3-4863-80e8-d7606fdeae60","Type":"ContainerDied","Data":"66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b"} Jan 22 12:00:22 crc kubenswrapper[5120]: I0122 12:00:22.531185 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gppd2" event={"ID":"23170abf-1fa3-4863-80e8-d7606fdeae60","Type":"ContainerStarted","Data":"c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd"} Jan 22 12:00:22 crc kubenswrapper[5120]: I0122 12:00:22.534543 5120 generic.go:358] "Generic (PLEG): container finished" podID="ee0a1780-1d96-46a3-8386-55404b6d1299" containerID="a76aaf951602603ba06dd3faa64300e242c288026ffa56088b05a6f5a164c1d1" exitCode=0 Jan 22 12:00:22 crc kubenswrapper[5120]: I0122 12:00:22.534677 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484720-f92nq" event={"ID":"ee0a1780-1d96-46a3-8386-55404b6d1299","Type":"ContainerDied","Data":"a76aaf951602603ba06dd3faa64300e242c288026ffa56088b05a6f5a164c1d1"} Jan 22 12:00:22 crc kubenswrapper[5120]: I0122 12:00:22.566037 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gppd2" podStartSLOduration=3.66442423 podStartE2EDuration="4.566002632s" podCreationTimestamp="2026-01-22 12:00:18 +0000 UTC" firstStartedPulling="2026-01-22 12:00:19.49386963 +0000 UTC m=+754.237817991" lastFinishedPulling="2026-01-22 
12:00:20.395448012 +0000 UTC m=+755.139396393" observedRunningTime="2026-01-22 12:00:22.560281133 +0000 UTC m=+757.304229534" watchObservedRunningTime="2026-01-22 12:00:22.566002632 +0000 UTC m=+757.309951013" Jan 22 12:00:23 crc kubenswrapper[5120]: I0122 12:00:23.800172 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484720-f92nq" Jan 22 12:00:23 crc kubenswrapper[5120]: I0122 12:00:23.886523 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zq25\" (UniqueName: \"kubernetes.io/projected/ee0a1780-1d96-46a3-8386-55404b6d1299-kube-api-access-6zq25\") pod \"ee0a1780-1d96-46a3-8386-55404b6d1299\" (UID: \"ee0a1780-1d96-46a3-8386-55404b6d1299\") " Jan 22 12:00:23 crc kubenswrapper[5120]: I0122 12:00:23.898731 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0a1780-1d96-46a3-8386-55404b6d1299-kube-api-access-6zq25" (OuterVolumeSpecName: "kube-api-access-6zq25") pod "ee0a1780-1d96-46a3-8386-55404b6d1299" (UID: "ee0a1780-1d96-46a3-8386-55404b6d1299"). InnerVolumeSpecName "kube-api-access-6zq25". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:23 crc kubenswrapper[5120]: I0122 12:00:23.989230 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6zq25\" (UniqueName: \"kubernetes.io/projected/ee0a1780-1d96-46a3-8386-55404b6d1299-kube-api-access-6zq25\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076027 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6"] Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076803 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerName="pull" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076829 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerName="pull" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076857 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ee0a1780-1d96-46a3-8386-55404b6d1299" containerName="oc" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076866 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0a1780-1d96-46a3-8386-55404b6d1299" containerName="oc" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076879 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerName="util" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076887 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerName="util" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076905 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerName="extract" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.076912 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerName="extract" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.077059 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="ee0a1780-1d96-46a3-8386-55404b6d1299" containerName="oc" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.077085 5120 
memory_manager.go:356] "RemoveStaleState removing state" podUID="04591ad2-b41c-420f-9328-a9ff515b4e1e" containerName="extract" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.083893 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.088540 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6"] Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.090451 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.192845 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.193669 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.193946 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgdjq\" (UniqueName: \"kubernetes.io/projected/6ae07b37-44a2-4e47-abb9-5587cb866c3b-kube-api-access-qgdjq\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.296077 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qgdjq\" (UniqueName: \"kubernetes.io/projected/6ae07b37-44a2-4e47-abb9-5587cb866c3b-kube-api-access-qgdjq\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.296212 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.296398 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.296761 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.297134 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.320035 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgdjq\" (UniqueName: \"kubernetes.io/projected/6ae07b37-44a2-4e47-abb9-5587cb866c3b-kube-api-access-qgdjq\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.409384 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.552499 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484720-f92nq" event={"ID":"ee0a1780-1d96-46a3-8386-55404b6d1299","Type":"ContainerDied","Data":"1eb12823267a042c8a078657072b8ba02586a08840e4a77c20ef76a66c21b12d"} Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.552568 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb12823267a042c8a078657072b8ba02586a08840e4a77c20ef76a66c21b12d" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.552660 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484720-f92nq" Jan 22 12:00:24 crc kubenswrapper[5120]: I0122 12:00:24.651163 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6"] Jan 22 12:00:24 crc kubenswrapper[5120]: W0122 12:00:24.656307 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae07b37_44a2_4e47_abb9_5587cb866c3b.slice/crio-91912c78c469cc18ad63184fb62a893742329c43cbe307b343e4eae7acbe1b44 WatchSource:0}: Error finding container 91912c78c469cc18ad63184fb62a893742329c43cbe307b343e4eae7acbe1b44: Status 404 returned error can't find the container with id 91912c78c469cc18ad63184fb62a893742329c43cbe307b343e4eae7acbe1b44 Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.097272 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn"] Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.105037 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.109046 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn"] Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.211206 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ddhn\" (UniqueName: \"kubernetes.io/projected/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-kube-api-access-5ddhn\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.211377 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.211571 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.313679 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5ddhn\" (UniqueName: \"kubernetes.io/projected/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-kube-api-access-5ddhn\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.313761 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.313973 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.314586 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.315093 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.340988 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ddhn\" (UniqueName: \"kubernetes.io/projected/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-kube-api-access-5ddhn\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.431437 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.566208 5120 generic.go:358] "Generic (PLEG): container finished" podID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerID="5a7eac1401ed4fb13883b23933c3760dfa0683d239946b53867596a24b0b4cff" exitCode=0 Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.567202 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" event={"ID":"6ae07b37-44a2-4e47-abb9-5587cb866c3b","Type":"ContainerDied","Data":"5a7eac1401ed4fb13883b23933c3760dfa0683d239946b53867596a24b0b4cff"} Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.567718 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" event={"ID":"6ae07b37-44a2-4e47-abb9-5587cb866c3b","Type":"ContainerStarted","Data":"91912c78c469cc18ad63184fb62a893742329c43cbe307b343e4eae7acbe1b44"} Jan 22 12:00:25 crc kubenswrapper[5120]: I0122 12:00:25.702004 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn"] Jan 22 12:00:26 crc kubenswrapper[5120]: I0122 12:00:26.577101 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" event={"ID":"6451a1e2-e63d-4a21-bab9-c97f9b2c9236","Type":"ContainerStarted","Data":"0b83c7bce79b0ae49b716cede97a00d45ebfb57b219ce5ae3b614cc43f978569"} Jan 22 12:00:26 crc kubenswrapper[5120]: I0122 12:00:26.577567 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" event={"ID":"6451a1e2-e63d-4a21-bab9-c97f9b2c9236","Type":"ContainerStarted","Data":"ce9e278098fe76d57c98f8549cea11c041bae3dca21cc3da02281b6c0192fbf5"} Jan 22 12:00:27 crc kubenswrapper[5120]: I0122 12:00:27.589001 5120 generic.go:358] "Generic (PLEG): container finished" podID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerID="0b83c7bce79b0ae49b716cede97a00d45ebfb57b219ce5ae3b614cc43f978569" exitCode=0 Jan 22 12:00:27 crc kubenswrapper[5120]: I0122 12:00:27.589166 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" event={"ID":"6451a1e2-e63d-4a21-bab9-c97f9b2c9236","Type":"ContainerDied","Data":"0b83c7bce79b0ae49b716cede97a00d45ebfb57b219ce5ae3b614cc43f978569"} Jan 22 12:00:28 crc kubenswrapper[5120]: I0122 12:00:28.386092 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:28 crc kubenswrapper[5120]: I0122 12:00:28.386246 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:28 crc kubenswrapper[5120]: I0122 12:00:28.459516 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:28 crc kubenswrapper[5120]: I0122 12:00:28.611148 5120 generic.go:358] "Generic (PLEG): container finished" podID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerID="b37c38b1002ae37fd9ff7c238483d69f23a331a6f3e37e5457de3788313dbb4b" exitCode=0 Jan 22 12:00:28 crc kubenswrapper[5120]: I0122 12:00:28.611271 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" event={"ID":"6ae07b37-44a2-4e47-abb9-5587cb866c3b","Type":"ContainerDied","Data":"b37c38b1002ae37fd9ff7c238483d69f23a331a6f3e37e5457de3788313dbb4b"} Jan 22 12:00:28 crc kubenswrapper[5120]: I0122 12:00:28.719134 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:28 crc kubenswrapper[5120]: I0122 12:00:28.841797 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zkkb7"] Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.035636 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zkkb7"] Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.035808 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.188074 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-catalog-content\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.188153 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-utilities\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.188185 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wpw2\" (UniqueName: \"kubernetes.io/projected/8b5a6248-a718-4c8c-b2d8-26c979672691-kube-api-access-4wpw2\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.290000 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-catalog-content\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.290083 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-utilities\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.290113 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4wpw2\" (UniqueName: \"kubernetes.io/projected/8b5a6248-a718-4c8c-b2d8-26c979672691-kube-api-access-4wpw2\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.291485 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-catalog-content\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.291746 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-utilities\") pod \"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.336732 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wpw2\" (UniqueName: \"kubernetes.io/projected/8b5a6248-a718-4c8c-b2d8-26c979672691-kube-api-access-4wpw2\") pod 
\"certified-operators-zkkb7\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.387275 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.676222 5120 generic.go:358] "Generic (PLEG): container finished" podID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerID="8a17a3cad236fdd8f7ff096c755d24c506711e1ef238f52220a595513ba9515d" exitCode=0 Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.678537 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" event={"ID":"6ae07b37-44a2-4e47-abb9-5587cb866c3b","Type":"ContainerDied","Data":"8a17a3cad236fdd8f7ff096c755d24c506711e1ef238f52220a595513ba9515d"} Jan 22 12:00:29 crc kubenswrapper[5120]: I0122 12:00:29.932122 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zkkb7"] Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.432716 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz"] Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.488994 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz"] Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.489207 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.609147 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.609213 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwkkg\" (UniqueName: \"kubernetes.io/projected/5915ccea-14c1-48c1-8e09-9cc508bb150e-kube-api-access-bwkkg\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.609245 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.686355 5120 generic.go:358] "Generic (PLEG): container finished" podID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerID="04e24cb4471e14d51fe8e02cf81f81f2adb50f52b16ddc7ba687333846cda4bb" exitCode=0 Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.686502 5120 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkkb7" event={"ID":"8b5a6248-a718-4c8c-b2d8-26c979672691","Type":"ContainerDied","Data":"04e24cb4471e14d51fe8e02cf81f81f2adb50f52b16ddc7ba687333846cda4bb"} Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.686543 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkkb7" event={"ID":"8b5a6248-a718-4c8c-b2d8-26c979672691","Type":"ContainerStarted","Data":"265be79110a72ebba1156eec2a58e1e49b4bd06b96371e08bd346f68e3921b3b"} Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.691438 5120 generic.go:358] "Generic (PLEG): container finished" podID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerID="8b2d8fb2b5ba83e645b5a7d4d15c755bd2b03fec8b886275e1e00e02c2fe4b16" exitCode=0 Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.691607 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" event={"ID":"6451a1e2-e63d-4a21-bab9-c97f9b2c9236","Type":"ContainerDied","Data":"8b2d8fb2b5ba83e645b5a7d4d15c755bd2b03fec8b886275e1e00e02c2fe4b16"} Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.713196 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.713495 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bwkkg\" (UniqueName: \"kubernetes.io/projected/5915ccea-14c1-48c1-8e09-9cc508bb150e-kube-api-access-bwkkg\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.713672 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.714271 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.714384 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.754025 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bwkkg\" (UniqueName: \"kubernetes.io/projected/5915ccea-14c1-48c1-8e09-9cc508bb150e-kube-api-access-bwkkg\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:30 crc kubenswrapper[5120]: I0122 12:00:30.952228 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.103212 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.222858 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-bundle\") pod \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.223140 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-util\") pod \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.223172 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgdjq\" (UniqueName: \"kubernetes.io/projected/6ae07b37-44a2-4e47-abb9-5587cb866c3b-kube-api-access-qgdjq\") pod \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\" (UID: \"6ae07b37-44a2-4e47-abb9-5587cb866c3b\") " Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.224290 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-bundle" (OuterVolumeSpecName: "bundle") pod "6ae07b37-44a2-4e47-abb9-5587cb866c3b" (UID: "6ae07b37-44a2-4e47-abb9-5587cb866c3b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.233103 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-util" (OuterVolumeSpecName: "util") pod "6ae07b37-44a2-4e47-abb9-5587cb866c3b" (UID: "6ae07b37-44a2-4e47-abb9-5587cb866c3b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.252205 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae07b37-44a2-4e47-abb9-5587cb866c3b-kube-api-access-qgdjq" (OuterVolumeSpecName: "kube-api-access-qgdjq") pod "6ae07b37-44a2-4e47-abb9-5587cb866c3b" (UID: "6ae07b37-44a2-4e47-abb9-5587cb866c3b"). InnerVolumeSpecName "kube-api-access-qgdjq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.323479 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz"] Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.324344 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.324365 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ae07b37-44a2-4e47-abb9-5587cb866c3b-util\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.324377 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgdjq\" (UniqueName: \"kubernetes.io/projected/6ae07b37-44a2-4e47-abb9-5587cb866c3b-kube-api-access-qgdjq\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.698449 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkkb7" event={"ID":"8b5a6248-a718-4c8c-b2d8-26c979672691","Type":"ContainerStarted","Data":"e8419fe5302c5032adf86949d7fc07ce99ef94c635247658e099ceb729e4276a"} Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.701118 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" event={"ID":"5915ccea-14c1-48c1-8e09-9cc508bb150e","Type":"ContainerStarted","Data":"2230e816d937f4b0f1d284a8c7efbd0a7ba111f1bf9693e2f9b6418177a7f0bd"} Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.701147 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" event={"ID":"5915ccea-14c1-48c1-8e09-9cc508bb150e","Type":"ContainerStarted","Data":"0acb727f11a4fa06c57cb8d1ffde7d59f3b3547f9a2d5b94ff706f6704b9f81a"} Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.709315 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" event={"ID":"6451a1e2-e63d-4a21-bab9-c97f9b2c9236","Type":"ContainerStarted","Data":"af13e96cefb9396e0b0ec76ac06165a744b48f2baf953b0f5556adb371a150da"} Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.726479 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" event={"ID":"6ae07b37-44a2-4e47-abb9-5587cb866c3b","Type":"ContainerDied","Data":"91912c78c469cc18ad63184fb62a893742329c43cbe307b343e4eae7acbe1b44"} Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.726526 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91912c78c469cc18ad63184fb62a893742329c43cbe307b343e4eae7acbe1b44" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.726589 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6" Jan 22 12:00:31 crc kubenswrapper[5120]: I0122 12:00:31.834326 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" podStartSLOduration=4.888308789 podStartE2EDuration="6.834295841s" podCreationTimestamp="2026-01-22 12:00:25 +0000 UTC" firstStartedPulling="2026-01-22 12:00:27.592418358 +0000 UTC m=+762.336366699" lastFinishedPulling="2026-01-22 12:00:29.53840541 +0000 UTC m=+764.282353751" observedRunningTime="2026-01-22 12:00:31.833859181 +0000 UTC m=+766.577807532" watchObservedRunningTime="2026-01-22 12:00:31.834295841 +0000 UTC m=+766.578244182" Jan 22 12:00:32 crc kubenswrapper[5120]: I0122 12:00:32.736418 5120 generic.go:358] "Generic (PLEG): container finished" podID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerID="af13e96cefb9396e0b0ec76ac06165a744b48f2baf953b0f5556adb371a150da" exitCode=0 Jan 22 12:00:32 crc kubenswrapper[5120]: I0122 12:00:32.736504 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" event={"ID":"6451a1e2-e63d-4a21-bab9-c97f9b2c9236","Type":"ContainerDied","Data":"af13e96cefb9396e0b0ec76ac06165a744b48f2baf953b0f5556adb371a150da"} Jan 22 12:00:33 crc kubenswrapper[5120]: I0122 12:00:33.219595 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gppd2"] Jan 22 12:00:33 crc kubenswrapper[5120]: I0122 12:00:33.219972 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gppd2" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="registry-server" containerID="cri-o://c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd" gracePeriod=2 Jan 22 12:00:33 crc kubenswrapper[5120]: I0122 12:00:33.746146 5120 generic.go:358] "Generic (PLEG): container finished" podID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerID="e8419fe5302c5032adf86949d7fc07ce99ef94c635247658e099ceb729e4276a" exitCode=0 Jan 22 12:00:33 crc kubenswrapper[5120]: I0122 12:00:33.746223 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkkb7" event={"ID":"8b5a6248-a718-4c8c-b2d8-26c979672691","Type":"ContainerDied","Data":"e8419fe5302c5032adf86949d7fc07ce99ef94c635247658e099ceb729e4276a"} Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.127783 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.272980 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.277665 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-bundle\") pod \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.277794 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-util\") pod \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.277944 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ddhn\" (UniqueName: \"kubernetes.io/projected/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-kube-api-access-5ddhn\") pod \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\" (UID: \"6451a1e2-e63d-4a21-bab9-c97f9b2c9236\") " Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.278655 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-bundle" (OuterVolumeSpecName: "bundle") pod "6451a1e2-e63d-4a21-bab9-c97f9b2c9236" (UID: "6451a1e2-e63d-4a21-bab9-c97f9b2c9236"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.286517 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-util" (OuterVolumeSpecName: "util") pod "6451a1e2-e63d-4a21-bab9-c97f9b2c9236" (UID: "6451a1e2-e63d-4a21-bab9-c97f9b2c9236"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.289555 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-kube-api-access-5ddhn" (OuterVolumeSpecName: "kube-api-access-5ddhn") pod "6451a1e2-e63d-4a21-bab9-c97f9b2c9236" (UID: "6451a1e2-e63d-4a21-bab9-c97f9b2c9236"). InnerVolumeSpecName "kube-api-access-5ddhn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.379564 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-catalog-content\") pod \"23170abf-1fa3-4863-80e8-d7606fdeae60\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.380229 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5npt\" (UniqueName: \"kubernetes.io/projected/23170abf-1fa3-4863-80e8-d7606fdeae60-kube-api-access-v5npt\") pod \"23170abf-1fa3-4863-80e8-d7606fdeae60\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.380403 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-utilities\") pod \"23170abf-1fa3-4863-80e8-d7606fdeae60\" (UID: \"23170abf-1fa3-4863-80e8-d7606fdeae60\") " Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.380812 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5ddhn\" (UniqueName: \"kubernetes.io/projected/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-kube-api-access-5ddhn\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.380915 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.381030 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6451a1e2-e63d-4a21-bab9-c97f9b2c9236-util\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.381582 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-utilities" (OuterVolumeSpecName: "utilities") pod "23170abf-1fa3-4863-80e8-d7606fdeae60" (UID: "23170abf-1fa3-4863-80e8-d7606fdeae60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.385330 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23170abf-1fa3-4863-80e8-d7606fdeae60-kube-api-access-v5npt" (OuterVolumeSpecName: "kube-api-access-v5npt") pod "23170abf-1fa3-4863-80e8-d7606fdeae60" (UID: "23170abf-1fa3-4863-80e8-d7606fdeae60"). InnerVolumeSpecName "kube-api-access-v5npt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.439784 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440562 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerName="pull" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440585 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerName="pull" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440602 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="registry-server" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440609 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="registry-server" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440623 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerName="extract" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440628 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerName="extract" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440642 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerName="pull" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440647 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerName="pull" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440655 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="extract-utilities" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440661 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="extract-utilities" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440669 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerName="util" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440676 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerName="util" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440691 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerName="extract" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440697 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerName="extract" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440708 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="extract-content" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440714 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="extract-content" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440729 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerName="util" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440734 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerName="util" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440846 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerName="registry-server" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440859 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="6ae07b37-44a2-4e47-abb9-5587cb866c3b" containerName="extract" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.440872 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="6451a1e2-e63d-4a21-bab9-c97f9b2c9236" containerName="extract" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.482831 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v5npt\" (UniqueName: \"kubernetes.io/projected/23170abf-1fa3-4863-80e8-d7606fdeae60-kube-api-access-v5npt\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.482868 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.496282 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23170abf-1fa3-4863-80e8-d7606fdeae60" (UID: "23170abf-1fa3-4863-80e8-d7606fdeae60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.583787 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23170abf-1fa3-4863-80e8-d7606fdeae60-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.700170 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.700414 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.700698 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.706321 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-d6h5d\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.707019 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.707860 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.708188 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.712247 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.712308 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.712386 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.714082 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-r9tgh\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.714804 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.723726 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.768233 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkkb7" event={"ID":"8b5a6248-a718-4c8c-b2d8-26c979672691","Type":"ContainerStarted","Data":"d53b338867d5bdf729dd622fdd10987f206d0753cdfe93d718d86426f2aed315"} Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.771055 5120 generic.go:358] "Generic (PLEG): container finished" podID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerID="2230e816d937f4b0f1d284a8c7efbd0a7ba111f1bf9693e2f9b6418177a7f0bd" exitCode=0 Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.771135 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" event={"ID":"5915ccea-14c1-48c1-8e09-9cc508bb150e","Type":"ContainerDied","Data":"2230e816d937f4b0f1d284a8c7efbd0a7ba111f1bf9693e2f9b6418177a7f0bd"} Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.777235 5120 generic.go:358] "Generic (PLEG): container finished" podID="23170abf-1fa3-4863-80e8-d7606fdeae60" containerID="c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd" exitCode=0 Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.777308 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gppd2" event={"ID":"23170abf-1fa3-4863-80e8-d7606fdeae60","Type":"ContainerDied","Data":"c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd"} Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.777329 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gppd2" event={"ID":"23170abf-1fa3-4863-80e8-d7606fdeae60","Type":"ContainerDied","Data":"95dee903a35163143fb71dae252bdc46fab906f21721e1c598215d1ffc26c24e"} Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.777351 5120 scope.go:117] "RemoveContainer" containerID="c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.777376 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gppd2" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.783369 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" event={"ID":"6451a1e2-e63d-4a21-bab9-c97f9b2c9236","Type":"ContainerDied","Data":"ce9e278098fe76d57c98f8549cea11c041bae3dca21cc3da02281b6c0192fbf5"} Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.783395 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce9e278098fe76d57c98f8549cea11c041bae3dca21cc3da02281b6c0192fbf5" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.783485 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.798711 5120 scope.go:117] "RemoveContainer" containerID="66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.819054 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zkkb7" podStartSLOduration=6.097870905 podStartE2EDuration="6.819033305s" podCreationTimestamp="2026-01-22 12:00:28 +0000 UTC" firstStartedPulling="2026-01-22 12:00:30.687820996 +0000 UTC m=+765.431769327" lastFinishedPulling="2026-01-22 12:00:31.408983386 +0000 UTC m=+766.152931727" observedRunningTime="2026-01-22 12:00:34.814835683 +0000 UTC m=+769.558784024" watchObservedRunningTime="2026-01-22 12:00:34.819033305 +0000 UTC m=+769.562981646" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.856808 5120 scope.go:117] "RemoveContainer" containerID="c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.861219 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gppd2"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.879234 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gppd2"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.888461 5120 scope.go:117] "RemoveContainer" containerID="c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd" Jan 22 12:00:34 crc kubenswrapper[5120]: E0122 12:00:34.890096 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd\": container with ID starting with c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd not found: ID does not exist" containerID="c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890163 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd"} err="failed to get container status \"c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd\": rpc error: code = NotFound desc = could not find container \"c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd\": container with ID starting with c8583c0d66e4a79dcf5605df70356896f577ca9dfb1ef4bbebd62aabfc59bffd not found: ID does not exist" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890196 5120 scope.go:117] 
"RemoveContainer" containerID="66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b" Jan 22 12:00:34 crc kubenswrapper[5120]: E0122 12:00:34.890492 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b\": container with ID starting with 66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b not found: ID does not exist" containerID="66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890507 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b"} err="failed to get container status \"66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b\": rpc error: code = NotFound desc = could not find container \"66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b\": container with ID starting with 66e3ce0d5b91442255cccccbc69a52d9b1cd60932f6ea9f75d5c1e6c0d86293b not found: ID does not exist" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890523 5120 scope.go:117] "RemoveContainer" containerID="c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890637 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtq6d\" (UniqueName: \"kubernetes.io/projected/6f74f225-731c-48b9-a98d-36a191b5ff41-kube-api-access-xtq6d\") pod \"obo-prometheus-operator-9bc85b4bf-kjb4b\" (UID: \"6f74f225-731c-48b9-a98d-36a191b5ff41\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" Jan 22 12:00:34 crc kubenswrapper[5120]: E0122 12:00:34.890678 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e\": container with ID starting with c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e not found: ID does not exist" containerID="c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890696 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e"} err="failed to get container status \"c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e\": rpc error: code = NotFound desc = could not find container \"c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e\": container with ID starting with c96918bf02933a8bbbdc17083708eca8a326f1b2c1a370b9d5b4d24b8940218e not found: ID does not exist" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890676 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e68b911-b2b1-4a04-a86f-91742f22bad9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb\" (UID: \"2e68b911-b2b1-4a04-a86f-91742f22bad9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.890983 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/6924228f-579c-408a-8a40-b103b066446d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7\" (UID: \"6924228f-579c-408a-8a40-b103b066446d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.891042 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e68b911-b2b1-4a04-a86f-91742f22bad9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb\" (UID: \"2e68b911-b2b1-4a04-a86f-91742f22bad9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.891072 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6924228f-579c-408a-8a40-b103b066446d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7\" (UID: \"6924228f-579c-408a-8a40-b103b066446d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.896507 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-s6759"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.921869 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-s6759"] Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.922122 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.926423 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.926563 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-k5xkx\"" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.993132 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6924228f-579c-408a-8a40-b103b066446d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7\" (UID: \"6924228f-579c-408a-8a40-b103b066446d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.993225 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e68b911-b2b1-4a04-a86f-91742f22bad9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb\" (UID: \"2e68b911-b2b1-4a04-a86f-91742f22bad9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.993257 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6924228f-579c-408a-8a40-b103b066446d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7\" (UID: \"6924228f-579c-408a-8a40-b103b066446d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 
22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.993345 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xtq6d\" (UniqueName: \"kubernetes.io/projected/6f74f225-731c-48b9-a98d-36a191b5ff41-kube-api-access-xtq6d\") pod \"obo-prometheus-operator-9bc85b4bf-kjb4b\" (UID: \"6f74f225-731c-48b9-a98d-36a191b5ff41\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" Jan 22 12:00:34 crc kubenswrapper[5120]: I0122 12:00:34.993372 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e68b911-b2b1-4a04-a86f-91742f22bad9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb\" (UID: \"2e68b911-b2b1-4a04-a86f-91742f22bad9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.001254 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e68b911-b2b1-4a04-a86f-91742f22bad9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb\" (UID: \"2e68b911-b2b1-4a04-a86f-91742f22bad9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.001262 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6924228f-579c-408a-8a40-b103b066446d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7\" (UID: \"6924228f-579c-408a-8a40-b103b066446d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.002387 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6924228f-579c-408a-8a40-b103b066446d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7\" (UID: \"6924228f-579c-408a-8a40-b103b066446d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.003313 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e68b911-b2b1-4a04-a86f-91742f22bad9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb\" (UID: \"2e68b911-b2b1-4a04-a86f-91742f22bad9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.031838 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtq6d\" (UniqueName: \"kubernetes.io/projected/6f74f225-731c-48b9-a98d-36a191b5ff41-kube-api-access-xtq6d\") pod \"obo-prometheus-operator-9bc85b4bf-kjb4b\" (UID: \"6f74f225-731c-48b9-a98d-36a191b5ff41\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.038369 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.046489 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.076083 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-n9lhg"] Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.094602 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-n9lhg"] Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.094835 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.095343 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9brd\" (UniqueName: \"kubernetes.io/projected/da59fdd4-fe7a-4efd-b136-79a9b05d38b8-kube-api-access-d9brd\") pod \"observability-operator-85c68dddb-s6759\" (UID: \"da59fdd4-fe7a-4efd-b136-79a9b05d38b8\") " pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.095438 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/da59fdd4-fe7a-4efd-b136-79a9b05d38b8-observability-operator-tls\") pod \"observability-operator-85c68dddb-s6759\" (UID: \"da59fdd4-fe7a-4efd-b136-79a9b05d38b8\") " pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.099051 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-k442f\"" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.197912 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/da59fdd4-fe7a-4efd-b136-79a9b05d38b8-observability-operator-tls\") pod \"observability-operator-85c68dddb-s6759\" (UID: \"da59fdd4-fe7a-4efd-b136-79a9b05d38b8\") " pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.198505 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/da376ee2-11ae-493e-9e4d-d8ac6fadfb53-openshift-service-ca\") pod \"perses-operator-669c9f96b5-n9lhg\" (UID: \"da376ee2-11ae-493e-9e4d-d8ac6fadfb53\") " pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.198536 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8x8h\" (UniqueName: \"kubernetes.io/projected/da376ee2-11ae-493e-9e4d-d8ac6fadfb53-kube-api-access-h8x8h\") pod \"perses-operator-669c9f96b5-n9lhg\" (UID: \"da376ee2-11ae-493e-9e4d-d8ac6fadfb53\") " pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.198661 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d9brd\" (UniqueName: \"kubernetes.io/projected/da59fdd4-fe7a-4efd-b136-79a9b05d38b8-kube-api-access-d9brd\") pod \"observability-operator-85c68dddb-s6759\" (UID: \"da59fdd4-fe7a-4efd-b136-79a9b05d38b8\") " pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:35 crc 
kubenswrapper[5120]: I0122 12:00:35.205235 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/da59fdd4-fe7a-4efd-b136-79a9b05d38b8-observability-operator-tls\") pod \"observability-operator-85c68dddb-s6759\" (UID: \"da59fdd4-fe7a-4efd-b136-79a9b05d38b8\") " pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.232632 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9brd\" (UniqueName: \"kubernetes.io/projected/da59fdd4-fe7a-4efd-b136-79a9b05d38b8-kube-api-access-d9brd\") pod \"observability-operator-85c68dddb-s6759\" (UID: \"da59fdd4-fe7a-4efd-b136-79a9b05d38b8\") " pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.241075 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.305820 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/da376ee2-11ae-493e-9e4d-d8ac6fadfb53-openshift-service-ca\") pod \"perses-operator-669c9f96b5-n9lhg\" (UID: \"da376ee2-11ae-493e-9e4d-d8ac6fadfb53\") " pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.305904 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h8x8h\" (UniqueName: \"kubernetes.io/projected/da376ee2-11ae-493e-9e4d-d8ac6fadfb53-kube-api-access-h8x8h\") pod \"perses-operator-669c9f96b5-n9lhg\" (UID: \"da376ee2-11ae-493e-9e4d-d8ac6fadfb53\") " pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.307506 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/da376ee2-11ae-493e-9e4d-d8ac6fadfb53-openshift-service-ca\") pod \"perses-operator-669c9f96b5-n9lhg\" (UID: \"da376ee2-11ae-493e-9e4d-d8ac6fadfb53\") " pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.328273 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.333177 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8x8h\" (UniqueName: \"kubernetes.io/projected/da376ee2-11ae-493e-9e4d-d8ac6fadfb53-kube-api-access-h8x8h\") pod \"perses-operator-669c9f96b5-n9lhg\" (UID: \"da376ee2-11ae-493e-9e4d-d8ac6fadfb53\") " pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.421278 5120 util.go:30] "No sandbox for pod can be found. 
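Each volume in the sequence above walks the same three-step ladder: "VerifyControllerAttachedVolume" (reconciler_common.go:251) admits the volume into the desired state, "MountVolume started" (reconciler_common.go:224) marks the operation in flight, and "MountVolume.SetUp succeeded" (operation_generator.go:615) records it in the actual state of the world. A schematic sketch of that desired-versus-actual reconciliation loop follows; the types are simplified stand-ins, not the kubelet's real ones.

```go
package main

import "fmt"

// Simplified stand-ins for the kubelet's desired/actual state of the world.
type volume struct{ name, pod string }

type world struct{ mounted map[string]bool }

func key(v volume) string { return v.pod + "/" + v.name }

// reconcile mounts anything the desired state wants that the actual state
// does not yet have, mirroring the started -> succeeded pairs in the log.
func reconcile(desired []volume, actual *world) {
	for _, v := range desired {
		if actual.mounted[key(v)] {
			continue // already in the actual state; nothing to do
		}
		fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
		// ... a volume plugin's SetUp would run here (secret, projected, empty-dir, ...)
		actual.mounted[key(v)] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
	}
}

func main() {
	desired := []volume{
		{"observability-operator-tls", "observability-operator-85c68dddb-s6759"},
		{"kube-api-access-d9brd", "observability-operator-85c68dddb-s6759"},
	}
	actual := &world{mounted: map[string]bool{}}
	reconcile(desired, actual) // prints one started/succeeded pair per volume
}
```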
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.460981 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7"] Jan 22 12:00:35 crc kubenswrapper[5120]: W0122 12:00:35.472342 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6924228f_579c_408a_8a40_b103b066446d.slice/crio-7eac22ecd403316ce17ef69f88757e0edcbf344ccb9f22fd5f70321684c02631 WatchSource:0}: Error finding container 7eac22ecd403316ce17ef69f88757e0edcbf344ccb9f22fd5f70321684c02631: Status 404 returned error can't find the container with id 7eac22ecd403316ce17ef69f88757e0edcbf344ccb9f22fd5f70321684c02631 Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.523611 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb"] Jan 22 12:00:35 crc kubenswrapper[5120]: W0122 12:00:35.557136 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e68b911_b2b1_4a04_a86f_91742f22bad9.slice/crio-dbe5f4e426824e4c95c4f1f8bfb0a8459f84f8dad672541dc5bb19ab4d2396cd WatchSource:0}: Error finding container dbe5f4e426824e4c95c4f1f8bfb0a8459f84f8dad672541dc5bb19ab4d2396cd: Status 404 returned error can't find the container with id dbe5f4e426824e4c95c4f1f8bfb0a8459f84f8dad672541dc5bb19ab4d2396cd Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.582287 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23170abf-1fa3-4863-80e8-d7606fdeae60" path="/var/lib/kubelet/pods/23170abf-1fa3-4863-80e8-d7606fdeae60/volumes" Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.799113 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" event={"ID":"6924228f-579c-408a-8a40-b103b066446d","Type":"ContainerStarted","Data":"7eac22ecd403316ce17ef69f88757e0edcbf344ccb9f22fd5f70321684c02631"} Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.802238 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" event={"ID":"2e68b911-b2b1-4a04-a86f-91742f22bad9","Type":"ContainerStarted","Data":"dbe5f4e426824e4c95c4f1f8bfb0a8459f84f8dad672541dc5bb19ab4d2396cd"} Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.851464 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b"] Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.865082 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-s6759"] Jan 22 12:00:35 crc kubenswrapper[5120]: W0122 12:00:35.871894 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f74f225_731c_48b9_a98d_36a191b5ff41.slice/crio-9b39ee2c2388a48eca1a17ae7985a3d3df8bfe0594be7ecdb12aa335443882a4 WatchSource:0}: Error finding container 9b39ee2c2388a48eca1a17ae7985a3d3df8bfe0594be7ecdb12aa335443882a4: Status 404 returned error can't find the container with id 9b39ee2c2388a48eca1a17ae7985a3d3df8bfe0594be7ecdb12aa335443882a4 Jan 22 12:00:35 crc kubenswrapper[5120]: W0122 12:00:35.877209 5120 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda59fdd4_fe7a_4efd_b136_79a9b05d38b8.slice/crio-7deff7a19d8403223806e1c06dff129d5801b8ca71d739b85eeeae458aff43b5 WatchSource:0}: Error finding container 7deff7a19d8403223806e1c06dff129d5801b8ca71d739b85eeeae458aff43b5: Status 404 returned error can't find the container with id 7deff7a19d8403223806e1c06dff129d5801b8ca71d739b85eeeae458aff43b5 Jan 22 12:00:35 crc kubenswrapper[5120]: I0122 12:00:35.951773 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-n9lhg"] Jan 22 12:00:35 crc kubenswrapper[5120]: W0122 12:00:35.956787 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda376ee2_11ae_493e_9e4d_d8ac6fadfb53.slice/crio-516239d3df01ec41ea98d35b66c832b8c2fd0d37be57343861c4779017db0c60 WatchSource:0}: Error finding container 516239d3df01ec41ea98d35b66c832b8c2fd0d37be57343861c4779017db0c60: Status 404 returned error can't find the container with id 516239d3df01ec41ea98d35b66c832b8c2fd0d37be57343861c4779017db0c60 Jan 22 12:00:36 crc kubenswrapper[5120]: I0122 12:00:36.812085 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-s6759" event={"ID":"da59fdd4-fe7a-4efd-b136-79a9b05d38b8","Type":"ContainerStarted","Data":"7deff7a19d8403223806e1c06dff129d5801b8ca71d739b85eeeae458aff43b5"} Jan 22 12:00:36 crc kubenswrapper[5120]: I0122 12:00:36.823435 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" event={"ID":"da376ee2-11ae-493e-9e4d-d8ac6fadfb53","Type":"ContainerStarted","Data":"516239d3df01ec41ea98d35b66c832b8c2fd0d37be57343861c4779017db0c60"} Jan 22 12:00:36 crc kubenswrapper[5120]: I0122 12:00:36.825154 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" event={"ID":"6f74f225-731c-48b9-a98d-36a191b5ff41","Type":"ContainerStarted","Data":"9b39ee2c2388a48eca1a17ae7985a3d3df8bfe0594be7ecdb12aa335443882a4"} Jan 22 12:00:39 crc kubenswrapper[5120]: I0122 12:00:39.388821 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:39 crc kubenswrapper[5120]: I0122 12:00:39.389887 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:39 crc kubenswrapper[5120]: I0122 12:00:39.487855 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:39 crc kubenswrapper[5120]: I0122 12:00:39.920831 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.764496 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-796f77fbdf-t9sbr"] Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.835680 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-796f77fbdf-t9sbr"] Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.835878 5120 util.go:30] "No sandbox for pod can be found. 
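The probe transitions for certified-operators-zkkb7 above show the usual gating order: the startup probe reports "unhealthy" first, readiness stays "not ready" until startup reports "started", and only then does readiness flip to "ready". A hypothetical probe configuration with that shape, expressed with the Kubernetes API types (the port and thresholds are assumptions for illustration, not values read from this log):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Illustrative probe shape for a registry-server style container.
	port := intstr.FromInt32(50051) // assumed gRPC port
	container := corev1.Container{
		Name: "registry-server",
		// The startup probe gates all other probes: until it succeeds,
		// readiness is not evaluated and the pod stays "not ready".
		StartupProbe: &corev1.Probe{
			ProbeHandler:     corev1.ProbeHandler{TCPSocket: &corev1.TCPSocketAction{Port: port}},
			PeriodSeconds:    1,
			FailureThreshold: 10, // tolerates "unhealthy" while the server warms up
		},
		ReadinessProbe: &corev1.Probe{
			ProbeHandler:  corev1.ProbeHandler{TCPSocket: &corev1.TCPSocketAction{Port: port}},
			PeriodSeconds: 10,
		},
	}
	fmt.Printf("%s: startup every %ds, readiness every %ds\n",
		container.Name, container.StartupProbe.PeriodSeconds, container.ReadinessProbe.PeriodSeconds)
}
```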
Need to start a new one" pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.841721 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.841924 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-r4sfd\"" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.842127 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.842308 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.974527 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/164c4d54-e519-4e1e-9e4b-3e2881312d55-webhook-cert\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.974911 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/164c4d54-e519-4e1e-9e4b-3e2881312d55-apiservice-cert\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:40 crc kubenswrapper[5120]: I0122 12:00:40.975109 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4989\" (UniqueName: \"kubernetes.io/projected/164c4d54-e519-4e1e-9e4b-3e2881312d55-kube-api-access-j4989\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.076778 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j4989\" (UniqueName: \"kubernetes.io/projected/164c4d54-e519-4e1e-9e4b-3e2881312d55-kube-api-access-j4989\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.076937 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/164c4d54-e519-4e1e-9e4b-3e2881312d55-webhook-cert\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.079534 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/164c4d54-e519-4e1e-9e4b-3e2881312d55-apiservice-cert\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.089428 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/164c4d54-e519-4e1e-9e4b-3e2881312d55-apiservice-cert\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.098621 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/164c4d54-e519-4e1e-9e4b-3e2881312d55-webhook-cert\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.156920 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4989\" (UniqueName: \"kubernetes.io/projected/164c4d54-e519-4e1e-9e4b-3e2881312d55-kube-api-access-j4989\") pod \"elastic-operator-796f77fbdf-t9sbr\" (UID: \"164c4d54-e519-4e1e-9e4b-3e2881312d55\") " pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.165463 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.421271 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zkkb7"] Jan 22 12:00:41 crc kubenswrapper[5120]: I0122 12:00:41.905093 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zkkb7" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="registry-server" containerID="cri-o://d53b338867d5bdf729dd622fdd10987f206d0753cdfe93d718d86426f2aed315" gracePeriod=2 Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.271639 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-sd4wv"] Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.537770 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-sd4wv"] Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.538107 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.542236 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-jwlzv\"" Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.706082 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjwbb\" (UniqueName: \"kubernetes.io/projected/b6e8a299-2880-4236-8f8b-b6983db7ed96-kube-api-access-zjwbb\") pod \"interconnect-operator-78b9bd8798-sd4wv\" (UID: \"b6e8a299-2880-4236-8f8b-b6983db7ed96\") " pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.809853 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zjwbb\" (UniqueName: \"kubernetes.io/projected/b6e8a299-2880-4236-8f8b-b6983db7ed96-kube-api-access-zjwbb\") pod \"interconnect-operator-78b9bd8798-sd4wv\" (UID: \"b6e8a299-2880-4236-8f8b-b6983db7ed96\") " pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.844006 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjwbb\" (UniqueName: \"kubernetes.io/projected/b6e8a299-2880-4236-8f8b-b6983db7ed96-kube-api-access-zjwbb\") pod \"interconnect-operator-78b9bd8798-sd4wv\" (UID: \"b6e8a299-2880-4236-8f8b-b6983db7ed96\") " pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.932310 5120 generic.go:358] "Generic (PLEG): container finished" podID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerID="d53b338867d5bdf729dd622fdd10987f206d0753cdfe93d718d86426f2aed315" exitCode=0 Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.933008 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkkb7" event={"ID":"8b5a6248-a718-4c8c-b2d8-26c979672691","Type":"ContainerDied","Data":"d53b338867d5bdf729dd622fdd10987f206d0753cdfe93d718d86426f2aed315"} Jan 22 12:00:42 crc kubenswrapper[5120]: I0122 12:00:42.934478 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" Jan 22 12:00:44 crc kubenswrapper[5120]: I0122 12:00:44.835795 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dxmrl"] Jan 22 12:00:44 crc kubenswrapper[5120]: I0122 12:00:44.880357 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dxmrl"] Jan 22 12:00:44 crc kubenswrapper[5120]: I0122 12:00:44.880642 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:44 crc kubenswrapper[5120]: I0122 12:00:44.957303 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvzbq\" (UniqueName: \"kubernetes.io/projected/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-kube-api-access-xvzbq\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:44 crc kubenswrapper[5120]: I0122 12:00:44.957383 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-utilities\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:44 crc kubenswrapper[5120]: I0122 12:00:44.957462 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-catalog-content\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.058989 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-catalog-content\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.059125 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xvzbq\" (UniqueName: \"kubernetes.io/projected/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-kube-api-access-xvzbq\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.059162 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-utilities\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.059740 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-utilities\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.060033 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-catalog-content\") pod \"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.102993 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvzbq\" (UniqueName: \"kubernetes.io/projected/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-kube-api-access-xvzbq\") pod 
\"community-operators-dxmrl\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.204386 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.386683 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.463826 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-utilities\") pod \"8b5a6248-a718-4c8c-b2d8-26c979672691\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.463996 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wpw2\" (UniqueName: \"kubernetes.io/projected/8b5a6248-a718-4c8c-b2d8-26c979672691-kube-api-access-4wpw2\") pod \"8b5a6248-a718-4c8c-b2d8-26c979672691\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.464308 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-catalog-content\") pod \"8b5a6248-a718-4c8c-b2d8-26c979672691\" (UID: \"8b5a6248-a718-4c8c-b2d8-26c979672691\") " Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.470921 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-utilities" (OuterVolumeSpecName: "utilities") pod "8b5a6248-a718-4c8c-b2d8-26c979672691" (UID: "8b5a6248-a718-4c8c-b2d8-26c979672691"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.502704 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5a6248-a718-4c8c-b2d8-26c979672691-kube-api-access-4wpw2" (OuterVolumeSpecName: "kube-api-access-4wpw2") pod "8b5a6248-a718-4c8c-b2d8-26c979672691" (UID: "8b5a6248-a718-4c8c-b2d8-26c979672691"). InnerVolumeSpecName "kube-api-access-4wpw2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.516337 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b5a6248-a718-4c8c-b2d8-26c979672691" (UID: "8b5a6248-a718-4c8c-b2d8-26c979672691"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.566332 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4wpw2\" (UniqueName: \"kubernetes.io/projected/8b5a6248-a718-4c8c-b2d8-26c979672691-kube-api-access-4wpw2\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.566381 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.566392 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b5a6248-a718-4c8c-b2d8-26c979672691-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.982348 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zkkb7" event={"ID":"8b5a6248-a718-4c8c-b2d8-26c979672691","Type":"ContainerDied","Data":"265be79110a72ebba1156eec2a58e1e49b4bd06b96371e08bd346f68e3921b3b"} Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.982407 5120 scope.go:117] "RemoveContainer" containerID="d53b338867d5bdf729dd622fdd10987f206d0753cdfe93d718d86426f2aed315" Jan 22 12:00:45 crc kubenswrapper[5120]: I0122 12:00:45.982572 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zkkb7" Jan 22 12:00:46 crc kubenswrapper[5120]: I0122 12:00:46.003901 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zkkb7"] Jan 22 12:00:46 crc kubenswrapper[5120]: I0122 12:00:46.020807 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zkkb7"] Jan 22 12:00:47 crc kubenswrapper[5120]: I0122 12:00:47.580012 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" path="/var/lib/kubelet/pods/8b5a6248-a718-4c8c-b2d8-26c979672691/volumes" Jan 22 12:00:52 crc kubenswrapper[5120]: I0122 12:00:52.923068 5120 scope.go:117] "RemoveContainer" containerID="e8419fe5302c5032adf86949d7fc07ce99ef94c635247658e099ceb729e4276a" Jan 22 12:00:53 crc kubenswrapper[5120]: I0122 12:00:53.023711 5120 scope.go:117] "RemoveContainer" containerID="04e24cb4471e14d51fe8e02cf81f81f2adb50f52b16ddc7ba687333846cda4bb" Jan 22 12:00:53 crc kubenswrapper[5120]: I0122 12:00:53.097530 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-sd4wv"] Jan 22 12:00:53 crc kubenswrapper[5120]: W0122 12:00:53.116847 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6e8a299_2880_4236_8f8b_b6983db7ed96.slice/crio-e709f35e6f7f77fb90cf5c5fd2e2a47179c48bb5eec94b9b7dccf8754f9af022 WatchSource:0}: Error finding container e709f35e6f7f77fb90cf5c5fd2e2a47179c48bb5eec94b9b7dccf8754f9af022: Status 404 returned error can't find the container with id e709f35e6f7f77fb90cf5c5fd2e2a47179c48bb5eec94b9b7dccf8754f9af022 Jan 22 12:00:53 crc kubenswrapper[5120]: I0122 12:00:53.503010 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-796f77fbdf-t9sbr"] Jan 22 12:00:53 crc kubenswrapper[5120]: I0122 12:00:53.612719 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-dxmrl"] Jan 22 12:00:53 crc kubenswrapper[5120]: W0122 12:00:53.617372 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb084ddd_669f_4358_a97d_4f3a5ba9fae7.slice/crio-2a31d739d9fbee1fe8e474a9523e8cd0a20910258f9d65e23ea080591bc7c2a6 WatchSource:0}: Error finding container 2a31d739d9fbee1fe8e474a9523e8cd0a20910258f9d65e23ea080591bc7c2a6: Status 404 returned error can't find the container with id 2a31d739d9fbee1fe8e474a9523e8cd0a20910258f9d65e23ea080591bc7c2a6 Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.104190 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" event={"ID":"b6e8a299-2880-4236-8f8b-b6983db7ed96","Type":"ContainerStarted","Data":"e709f35e6f7f77fb90cf5c5fd2e2a47179c48bb5eec94b9b7dccf8754f9af022"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.109304 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerStarted","Data":"e41051f49340ebf37bce642806a3eeef2940a39175cea5236a923352e9d285d7"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.109360 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerStarted","Data":"2a31d739d9fbee1fe8e474a9523e8cd0a20910258f9d65e23ea080591bc7c2a6"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.111407 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-s6759" event={"ID":"da59fdd4-fe7a-4efd-b136-79a9b05d38b8","Type":"ContainerStarted","Data":"7b081cd748b64d4412cf433484aca345a3dc58b87ac614237dbf16e41e6470e6"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.111734 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.113639 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" event={"ID":"da376ee2-11ae-493e-9e4d-d8ac6fadfb53","Type":"ContainerStarted","Data":"5011c740aeeeefd3f87c0b199bac4428287b673354d19f67d55c9b38162fdbc7"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.113881 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.116234 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" event={"ID":"6924228f-579c-408a-8a40-b103b066446d","Type":"ContainerStarted","Data":"374f57e2a88009eca867315ec61b154a67e2811b189b1a9c604b3feae64609cf"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.117366 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" event={"ID":"164c4d54-e519-4e1e-9e4b-3e2881312d55","Type":"ContainerStarted","Data":"ed92b3deb2dd7e7fd2d55fc582e2b90d346b992cbbaf81bb7daae2cbbd1ad89f"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.122376 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" 
event={"ID":"5915ccea-14c1-48c1-8e09-9cc508bb150e","Type":"ContainerStarted","Data":"802a4633acd59a79744b7cd3b94900cae00c9264f92f5f9efd8117e4aad8494e"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.124446 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" event={"ID":"6f74f225-731c-48b9-a98d-36a191b5ff41","Type":"ContainerStarted","Data":"7cc4f4cec7e980219e8b3d1caa52c63506622afefcda6781783f435ab9466227"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.126898 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" event={"ID":"2e68b911-b2b1-4a04-a86f-91742f22bad9","Type":"ContainerStarted","Data":"980f5f6379f90c53764fe3ebd0806b348f27c07a0537933e88050bd05e0c2dd4"} Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.138633 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-s6759" Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.156935 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kjb4b" podStartSLOduration=2.993501276 podStartE2EDuration="20.156907806s" podCreationTimestamp="2026-01-22 12:00:34 +0000 UTC" firstStartedPulling="2026-01-22 12:00:35.874991726 +0000 UTC m=+770.618940067" lastFinishedPulling="2026-01-22 12:00:53.038398256 +0000 UTC m=+787.782346597" observedRunningTime="2026-01-22 12:00:54.151331349 +0000 UTC m=+788.895279690" watchObservedRunningTime="2026-01-22 12:00:54.156907806 +0000 UTC m=+788.900856147" Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.173562 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb" podStartSLOduration=2.708230173 podStartE2EDuration="20.173535333s" podCreationTimestamp="2026-01-22 12:00:34 +0000 UTC" firstStartedPulling="2026-01-22 12:00:35.573845004 +0000 UTC m=+770.317793345" lastFinishedPulling="2026-01-22 12:00:53.039150164 +0000 UTC m=+787.783098505" observedRunningTime="2026-01-22 12:00:54.169010253 +0000 UTC m=+788.912958604" watchObservedRunningTime="2026-01-22 12:00:54.173535333 +0000 UTC m=+788.917483674" Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.198701 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7" podStartSLOduration=2.736606854 podStartE2EDuration="20.19868372s" podCreationTimestamp="2026-01-22 12:00:34 +0000 UTC" firstStartedPulling="2026-01-22 12:00:35.481155647 +0000 UTC m=+770.225103988" lastFinishedPulling="2026-01-22 12:00:52.943232513 +0000 UTC m=+787.687180854" observedRunningTime="2026-01-22 12:00:54.195553874 +0000 UTC m=+788.939502225" watchObservedRunningTime="2026-01-22 12:00:54.19868372 +0000 UTC m=+788.942632061" Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.224211 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" podStartSLOduration=2.188042802 podStartE2EDuration="19.224186585s" podCreationTimestamp="2026-01-22 12:00:35 +0000 UTC" firstStartedPulling="2026-01-22 12:00:35.959865383 +0000 UTC m=+770.703813724" lastFinishedPulling="2026-01-22 12:00:52.996009166 +0000 UTC m=+787.739957507" observedRunningTime="2026-01-22 12:00:54.221746126 +0000 UTC 
m=+788.965694487" watchObservedRunningTime="2026-01-22 12:00:54.224186585 +0000 UTC m=+788.968134926" Jan 22 12:00:54 crc kubenswrapper[5120]: I0122 12:00:54.280756 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-s6759" podStartSLOduration=3.220699738 podStartE2EDuration="20.280731162s" podCreationTimestamp="2026-01-22 12:00:34 +0000 UTC" firstStartedPulling="2026-01-22 12:00:35.87838062 +0000 UTC m=+770.622328961" lastFinishedPulling="2026-01-22 12:00:52.938412044 +0000 UTC m=+787.682360385" observedRunningTime="2026-01-22 12:00:54.250551702 +0000 UTC m=+788.994500043" watchObservedRunningTime="2026-01-22 12:00:54.280731162 +0000 UTC m=+789.024679503" Jan 22 12:00:55 crc kubenswrapper[5120]: I0122 12:00:55.138828 5120 generic.go:358] "Generic (PLEG): container finished" podID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerID="802a4633acd59a79744b7cd3b94900cae00c9264f92f5f9efd8117e4aad8494e" exitCode=0 Jan 22 12:00:55 crc kubenswrapper[5120]: I0122 12:00:55.138907 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" event={"ID":"5915ccea-14c1-48c1-8e09-9cc508bb150e","Type":"ContainerDied","Data":"802a4633acd59a79744b7cd3b94900cae00c9264f92f5f9efd8117e4aad8494e"} Jan 22 12:00:55 crc kubenswrapper[5120]: I0122 12:00:55.141405 5120 generic.go:358] "Generic (PLEG): container finished" podID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerID="e41051f49340ebf37bce642806a3eeef2940a39175cea5236a923352e9d285d7" exitCode=0 Jan 22 12:00:55 crc kubenswrapper[5120]: I0122 12:00:55.141506 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerDied","Data":"e41051f49340ebf37bce642806a3eeef2940a39175cea5236a923352e9d285d7"} Jan 22 12:00:57 crc kubenswrapper[5120]: I0122 12:00:57.165685 5120 generic.go:358] "Generic (PLEG): container finished" podID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerID="4f72ccb1642fc514b9735baafbda633ffe9225e4363cd42b5b789071633690a3" exitCode=0 Jan 22 12:00:57 crc kubenswrapper[5120]: I0122 12:00:57.166155 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" event={"ID":"5915ccea-14c1-48c1-8e09-9cc508bb150e","Type":"ContainerDied","Data":"4f72ccb1642fc514b9735baafbda633ffe9225e4363cd42b5b789071633690a3"} Jan 22 12:00:57 crc kubenswrapper[5120]: I0122 12:00:57.168662 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerStarted","Data":"176d5c6da4697db412b127de755e4488fee55bf7587afebcbe759912236afe77"} Jan 22 12:00:58 crc kubenswrapper[5120]: I0122 12:00:58.177453 5120 generic.go:358] "Generic (PLEG): container finished" podID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerID="176d5c6da4697db412b127de755e4488fee55bf7587afebcbe759912236afe77" exitCode=0 Jan 22 12:00:58 crc kubenswrapper[5120]: I0122 12:00:58.177511 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerDied","Data":"176d5c6da4697db412b127de755e4488fee55bf7587afebcbe759912236afe77"} Jan 22 12:01:00 crc kubenswrapper[5120]: I0122 12:01:00.872753 5120 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:01:00 crc kubenswrapper[5120]: I0122 12:01:00.981744 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-bundle\") pod \"5915ccea-14c1-48c1-8e09-9cc508bb150e\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " Jan 22 12:01:00 crc kubenswrapper[5120]: I0122 12:01:00.981795 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwkkg\" (UniqueName: \"kubernetes.io/projected/5915ccea-14c1-48c1-8e09-9cc508bb150e-kube-api-access-bwkkg\") pod \"5915ccea-14c1-48c1-8e09-9cc508bb150e\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " Jan 22 12:01:00 crc kubenswrapper[5120]: I0122 12:01:00.981822 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-util\") pod \"5915ccea-14c1-48c1-8e09-9cc508bb150e\" (UID: \"5915ccea-14c1-48c1-8e09-9cc508bb150e\") " Jan 22 12:01:00 crc kubenswrapper[5120]: I0122 12:01:00.984821 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-bundle" (OuterVolumeSpecName: "bundle") pod "5915ccea-14c1-48c1-8e09-9cc508bb150e" (UID: "5915ccea-14c1-48c1-8e09-9cc508bb150e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:01:00 crc kubenswrapper[5120]: I0122 12:01:00.990764 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-util" (OuterVolumeSpecName: "util") pod "5915ccea-14c1-48c1-8e09-9cc508bb150e" (UID: "5915ccea-14c1-48c1-8e09-9cc508bb150e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:01:01 crc kubenswrapper[5120]: I0122 12:01:01.001902 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5915ccea-14c1-48c1-8e09-9cc508bb150e-kube-api-access-bwkkg" (OuterVolumeSpecName: "kube-api-access-bwkkg") pod "5915ccea-14c1-48c1-8e09-9cc508bb150e" (UID: "5915ccea-14c1-48c1-8e09-9cc508bb150e"). InnerVolumeSpecName "kube-api-access-bwkkg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:01:01 crc kubenswrapper[5120]: I0122 12:01:01.083137 5120 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:01 crc kubenswrapper[5120]: I0122 12:01:01.083166 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bwkkg\" (UniqueName: \"kubernetes.io/projected/5915ccea-14c1-48c1-8e09-9cc508bb150e-kube-api-access-bwkkg\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:01 crc kubenswrapper[5120]: I0122 12:01:01.083175 5120 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5915ccea-14c1-48c1-8e09-9cc508bb150e-util\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:01 crc kubenswrapper[5120]: I0122 12:01:01.200356 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" Jan 22 12:01:01 crc kubenswrapper[5120]: I0122 12:01:01.200465 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz" event={"ID":"5915ccea-14c1-48c1-8e09-9cc508bb150e","Type":"ContainerDied","Data":"0acb727f11a4fa06c57cb8d1ffde7d59f3b3547f9a2d5b94ff706f6704b9f81a"} Jan 22 12:01:01 crc kubenswrapper[5120]: I0122 12:01:01.200521 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0acb727f11a4fa06c57cb8d1ffde7d59f3b3547f9a2d5b94ff706f6704b9f81a" Jan 22 12:01:05 crc kubenswrapper[5120]: I0122 12:01:05.148253 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-n9lhg" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.647312 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62"] Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648222 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="extract-utilities" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648238 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="extract-utilities" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648249 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="extract-content" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648256 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="extract-content" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648286 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerName="pull" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648292 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerName="pull" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648300 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="registry-server" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648306 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="registry-server" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648331 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerName="util" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648336 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerName="util" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648345 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerName="extract" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648352 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerName="extract" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 
12:01:10.648457 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="8b5a6248-a718-4c8c-b2d8-26c979672691" containerName="registry-server" Jan 22 12:01:10 crc kubenswrapper[5120]: I0122 12:01:10.648470 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="5915ccea-14c1-48c1-8e09-9cc508bb150e" containerName="extract" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.208891 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.211895 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-mrl56\"" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.212332 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.212846 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.217848 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62"] Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.245248 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3360ac52-3ac8-4f21-9f80-e225b93f2056-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-fsh62\" (UID: \"3360ac52-3ac8-4f21-9f80-e225b93f2056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.245315 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn99d\" (UniqueName: \"kubernetes.io/projected/3360ac52-3ac8-4f21-9f80-e225b93f2056-kube-api-access-bn99d\") pod \"cert-manager-operator-controller-manager-64c74584c4-fsh62\" (UID: \"3360ac52-3ac8-4f21-9f80-e225b93f2056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.347134 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bn99d\" (UniqueName: \"kubernetes.io/projected/3360ac52-3ac8-4f21-9f80-e225b93f2056-kube-api-access-bn99d\") pod \"cert-manager-operator-controller-manager-64c74584c4-fsh62\" (UID: \"3360ac52-3ac8-4f21-9f80-e225b93f2056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.347244 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3360ac52-3ac8-4f21-9f80-e225b93f2056-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-fsh62\" (UID: \"3360ac52-3ac8-4f21-9f80-e225b93f2056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.348199 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3360ac52-3ac8-4f21-9f80-e225b93f2056-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-fsh62\" 
(UID: \"3360ac52-3ac8-4f21-9f80-e225b93f2056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.370365 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn99d\" (UniqueName: \"kubernetes.io/projected/3360ac52-3ac8-4f21-9f80-e225b93f2056-kube-api-access-bn99d\") pod \"cert-manager-operator-controller-manager-64c74584c4-fsh62\" (UID: \"3360ac52-3ac8-4f21-9f80-e225b93f2056\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:12 crc kubenswrapper[5120]: I0122 12:01:12.568574 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" Jan 22 12:01:15 crc kubenswrapper[5120]: W0122 12:01:15.177023 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3360ac52_3ac8_4f21_9f80_e225b93f2056.slice/crio-d80a50eabd170111200959024aae438c3f8a7ec38a34feb510c8ffd1d1be8da0 WatchSource:0}: Error finding container d80a50eabd170111200959024aae438c3f8a7ec38a34feb510c8ffd1d1be8da0: Status 404 returned error can't find the container with id d80a50eabd170111200959024aae438c3f8a7ec38a34feb510c8ffd1d1be8da0 Jan 22 12:01:15 crc kubenswrapper[5120]: I0122 12:01:15.183216 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62"] Jan 22 12:01:15 crc kubenswrapper[5120]: I0122 12:01:15.315721 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" event={"ID":"3360ac52-3ac8-4f21-9f80-e225b93f2056","Type":"ContainerStarted","Data":"d80a50eabd170111200959024aae438c3f8a7ec38a34feb510c8ffd1d1be8da0"} Jan 22 12:01:16 crc kubenswrapper[5120]: I0122 12:01:16.323316 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" event={"ID":"b6e8a299-2880-4236-8f8b-b6983db7ed96","Type":"ContainerStarted","Data":"8743b47d2c9fb36616db41b8cf4a3ae9d3b694267453758a4a96e39424ada641"} Jan 22 12:01:16 crc kubenswrapper[5120]: I0122 12:01:16.327187 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerStarted","Data":"76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623"} Jan 22 12:01:16 crc kubenswrapper[5120]: I0122 12:01:16.330680 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" event={"ID":"164c4d54-e519-4e1e-9e4b-3e2881312d55","Type":"ContainerStarted","Data":"ec197ea488202eca7ed71560b2f91de6854e07c86caeadcb8ac6716ba236310b"} Jan 22 12:01:16 crc kubenswrapper[5120]: I0122 12:01:16.348516 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-sd4wv" podStartSLOduration=12.690898919 podStartE2EDuration="34.348497481s" podCreationTimestamp="2026-01-22 12:00:42 +0000 UTC" firstStartedPulling="2026-01-22 12:00:53.119021132 +0000 UTC m=+787.862969473" lastFinishedPulling="2026-01-22 12:01:14.776619694 +0000 UTC m=+809.520568035" observedRunningTime="2026-01-22 12:01:16.341009277 +0000 UTC m=+811.084957618" watchObservedRunningTime="2026-01-22 12:01:16.348497481 +0000 UTC m=+811.092445822" 
Jan 22 12:01:16 crc kubenswrapper[5120]: I0122 12:01:16.377772 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dxmrl" podStartSLOduration=29.878599249 podStartE2EDuration="32.377755158s" podCreationTimestamp="2026-01-22 12:00:44 +0000 UTC" firstStartedPulling="2026-01-22 12:00:54.110438687 +0000 UTC m=+788.854387028" lastFinishedPulling="2026-01-22 12:00:56.609594596 +0000 UTC m=+791.353542937" observedRunningTime="2026-01-22 12:01:16.373708349 +0000 UTC m=+811.117656710" watchObservedRunningTime="2026-01-22 12:01:16.377755158 +0000 UTC m=+811.121703499" Jan 22 12:01:16 crc kubenswrapper[5120]: I0122 12:01:16.408487 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-796f77fbdf-t9sbr" podStartSLOduration=15.296854983 podStartE2EDuration="36.408462681s" podCreationTimestamp="2026-01-22 12:00:40 +0000 UTC" firstStartedPulling="2026-01-22 12:00:53.557097481 +0000 UTC m=+788.301045822" lastFinishedPulling="2026-01-22 12:01:14.668705179 +0000 UTC m=+809.412653520" observedRunningTime="2026-01-22 12:01:16.396823375 +0000 UTC m=+811.140771716" watchObservedRunningTime="2026-01-22 12:01:16.408462681 +0000 UTC m=+811.152411022" Jan 22 12:01:16 crc kubenswrapper[5120]: I0122 12:01:16.953140 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.023588 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.023839 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.031885 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.032139 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.032139 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.032279 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.032387 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-qbcgw\"" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.032706 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.032809 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.032909 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.033311 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115132 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115231 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115263 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115308 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115328 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115455 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115486 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115659 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: 
\"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115718 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115768 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115816 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115865 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115920 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.115940 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.116002 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.217589 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-plugins-local\") pod 
\"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.217833 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.217943 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218111 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218156 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218201 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218220 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218155 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218242 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc 
kubenswrapper[5120]: I0122 12:01:17.218368 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218493 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218537 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218552 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218626 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218681 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218683 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218764 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218790 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-remote-certificate-authorities\") 
pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218880 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.218906 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.219437 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.220157 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.220174 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.231023 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.231075 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.231039 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.231337 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.237258 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.237302 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.240439 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/d6cd7adc-81ad-4b43-bd4c-7f48f1df35be-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.345028 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:17 crc kubenswrapper[5120]: I0122 12:01:17.998123 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 22 12:01:18 crc kubenswrapper[5120]: W0122 12:01:18.028539 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6cd7adc_81ad_4b43_bd4c_7f48f1df35be.slice/crio-b5cca3810f03844884b09910051d4888d0fe8e86f8b47c72bb681e4774a48bff WatchSource:0}: Error finding container b5cca3810f03844884b09910051d4888d0fe8e86f8b47c72bb681e4774a48bff: Status 404 returned error can't find the container with id b5cca3810f03844884b09910051d4888d0fe8e86f8b47c72bb681e4774a48bff Jan 22 12:01:18 crc kubenswrapper[5120]: I0122 12:01:18.352096 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be","Type":"ContainerStarted","Data":"b5cca3810f03844884b09910051d4888d0fe8e86f8b47c72bb681e4774a48bff"} Jan 22 12:01:25 crc kubenswrapper[5120]: I0122 12:01:25.205439 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:01:25 crc kubenswrapper[5120]: I0122 12:01:25.205849 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:01:25 crc kubenswrapper[5120]: I0122 12:01:25.263949 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:01:25 crc kubenswrapper[5120]: I0122 12:01:25.397535 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" event={"ID":"3360ac52-3ac8-4f21-9f80-e225b93f2056","Type":"ContainerStarted","Data":"f7b341664d9852f50da8e3be5edc21dfff699eef29c77efb9573fd5602f37a87"} Jan 22 12:01:25 crc kubenswrapper[5120]: I0122 12:01:25.424326 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-fsh62" podStartSLOduration=5.848324918 podStartE2EDuration="15.424308701s" podCreationTimestamp="2026-01-22 12:01:10 +0000 UTC" firstStartedPulling="2026-01-22 12:01:15.181108241 +0000 UTC m=+809.925056582" lastFinishedPulling="2026-01-22 12:01:24.757092034 +0000 UTC m=+819.501040365" observedRunningTime="2026-01-22 12:01:25.418924369 +0000 UTC m=+820.162872710" watchObservedRunningTime="2026-01-22 12:01:25.424308701 +0000 UTC m=+820.168257042" Jan 22 12:01:25 crc kubenswrapper[5120]: I0122 12:01:25.472085 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:01:25 crc kubenswrapper[5120]: I0122 12:01:25.520172 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dxmrl"] Jan 22 12:01:27 crc kubenswrapper[5120]: I0122 12:01:27.411207 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dxmrl" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="registry-server" containerID="cri-o://76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623" gracePeriod=2 Jan 22 12:01:28 crc kubenswrapper[5120]: I0122 12:01:28.421288 5120 generic.go:358] "Generic (PLEG): container 
finished" podID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerID="76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623" exitCode=0 Jan 22 12:01:28 crc kubenswrapper[5120]: I0122 12:01:28.421338 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerDied","Data":"76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623"} Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.355736 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-r299r"] Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.704312 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.709076 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.709149 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.708997 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-lqldl\"" Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.719753 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-r299r"] Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.870765 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9khmn\" (UniqueName: \"kubernetes.io/projected/fab5bde7-2cb3-4840-955e-6eec20d29b5d-kube-api-access-9khmn\") pod \"cert-manager-webhook-7894b5b9b4-r299r\" (UID: \"fab5bde7-2cb3-4840-955e-6eec20d29b5d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.870848 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fab5bde7-2cb3-4840-955e-6eec20d29b5d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-r299r\" (UID: \"fab5bde7-2cb3-4840-955e-6eec20d29b5d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.976073 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9khmn\" (UniqueName: \"kubernetes.io/projected/fab5bde7-2cb3-4840-955e-6eec20d29b5d-kube-api-access-9khmn\") pod \"cert-manager-webhook-7894b5b9b4-r299r\" (UID: \"fab5bde7-2cb3-4840-955e-6eec20d29b5d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" Jan 22 12:01:29 crc kubenswrapper[5120]: I0122 12:01:29.976512 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fab5bde7-2cb3-4840-955e-6eec20d29b5d-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-r299r\" (UID: \"fab5bde7-2cb3-4840-955e-6eec20d29b5d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.001093 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fab5bde7-2cb3-4840-955e-6eec20d29b5d-bound-sa-token\") pod 
\"cert-manager-webhook-7894b5b9b4-r299r\" (UID: \"fab5bde7-2cb3-4840-955e-6eec20d29b5d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.018060 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9khmn\" (UniqueName: \"kubernetes.io/projected/fab5bde7-2cb3-4840-955e-6eec20d29b5d-kube-api-access-9khmn\") pod \"cert-manager-webhook-7894b5b9b4-r299r\" (UID: \"fab5bde7-2cb3-4840-955e-6eec20d29b5d\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.027034 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.492144 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc"] Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.555581 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc"] Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.556130 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc" Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.560525 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-tph25\"" Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.585453 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/abe35b4f-1ae8-4e82-8b22-5f2d8fe01445-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-qc2vc\" (UID: \"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc" Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.585555 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrvv8\" (UniqueName: \"kubernetes.io/projected/abe35b4f-1ae8-4e82-8b22-5f2d8fe01445-kube-api-access-rrvv8\") pod \"cert-manager-cainjector-7dbf76d5c8-qc2vc\" (UID: \"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc" Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.686323 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvv8\" (UniqueName: \"kubernetes.io/projected/abe35b4f-1ae8-4e82-8b22-5f2d8fe01445-kube-api-access-rrvv8\") pod \"cert-manager-cainjector-7dbf76d5c8-qc2vc\" (UID: \"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc" Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.686444 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/abe35b4f-1ae8-4e82-8b22-5f2d8fe01445-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-qc2vc\" (UID: \"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc" Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.709315 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/abe35b4f-1ae8-4e82-8b22-5f2d8fe01445-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-qc2vc\" (UID: 
\"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc" Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.709880 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvv8\" (UniqueName: \"kubernetes.io/projected/abe35b4f-1ae8-4e82-8b22-5f2d8fe01445-kube-api-access-rrvv8\") pod \"cert-manager-cainjector-7dbf76d5c8-qc2vc\" (UID: \"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc" Jan 22 12:01:30 crc kubenswrapper[5120]: I0122 12:01:30.883216 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.563292 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.571803 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.575044 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.575262 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.575423 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\"" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.575616 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.581190 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.640780 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.640830 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.640863 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-push\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.640898 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.640964 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.641212 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.641262 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.641439 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.641500 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.641539 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2pw6\" (UniqueName: \"kubernetes.io/projected/a5c6b382-0699-4ddd-9be8-7031369555a5-kube-api-access-r2pw6\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.641698 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.641727 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743096 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743174 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743223 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743287 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743315 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743348 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r2pw6\" (UniqueName: \"kubernetes.io/projected/a5c6b382-0699-4ddd-9be8-7031369555a5-kube-api-access-r2pw6\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743375 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743405 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743443 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743466 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743493 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-push\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.743532 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.744123 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.744313 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.744425 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.744523 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.744553 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.744840 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.744849 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.745173 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.745298 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.749263 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.750492 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-push\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.762443 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2pw6\" (UniqueName: \"kubernetes.io/projected/a5c6b382-0699-4ddd-9be8-7031369555a5-kube-api-access-r2pw6\") pod \"service-telemetry-operator-1-build\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:32 crc kubenswrapper[5120]: I0122 12:01:32.900950 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:01:35 crc kubenswrapper[5120]: E0122 12:01:35.399196 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623 is running failed: container process not found" containerID="76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 12:01:35 crc kubenswrapper[5120]: E0122 12:01:35.399498 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623 is running failed: container process not found" containerID="76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 12:01:35 crc kubenswrapper[5120]: E0122 12:01:35.399838 5120 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623 is running failed: container process not found" containerID="76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 12:01:35 crc kubenswrapper[5120]: E0122 12:01:35.399874 5120 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-dxmrl" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="registry-server" probeResult="unknown" Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.723510 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.837204 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvzbq\" (UniqueName: \"kubernetes.io/projected/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-kube-api-access-xvzbq\") pod \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.837427 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-utilities\") pod \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.837546 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-catalog-content\") pod \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\" (UID: \"cb084ddd-669f-4358-a97d-4f3a5ba9fae7\") " Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.838659 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-utilities" (OuterVolumeSpecName: "utilities") pod "cb084ddd-669f-4358-a97d-4f3a5ba9fae7" (UID: "cb084ddd-669f-4358-a97d-4f3a5ba9fae7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.848378 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-kube-api-access-xvzbq" (OuterVolumeSpecName: "kube-api-access-xvzbq") pod "cb084ddd-669f-4358-a97d-4f3a5ba9fae7" (UID: "cb084ddd-669f-4358-a97d-4f3a5ba9fae7"). InnerVolumeSpecName "kube-api-access-xvzbq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.916477 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb084ddd-669f-4358-a97d-4f3a5ba9fae7" (UID: "cb084ddd-669f-4358-a97d-4f3a5ba9fae7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.940535 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xvzbq\" (UniqueName: \"kubernetes.io/projected/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-kube-api-access-xvzbq\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.941068 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:38 crc kubenswrapper[5120]: I0122 12:01:38.941085 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb084ddd-669f-4358-a97d-4f3a5ba9fae7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.184009 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-r299r"] Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.258765 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.293375 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc"] Jan 22 12:01:39 crc kubenswrapper[5120]: W0122 12:01:39.350013 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfab5bde7_2cb3_4840_955e_6eec20d29b5d.slice/crio-850f5542fbbfebfa7e09ffa77a0e28f8662c633d8d7fcd44b3f68974cb19e58d WatchSource:0}: Error finding container 850f5542fbbfebfa7e09ffa77a0e28f8662c633d8d7fcd44b3f68974cb19e58d: Status 404 returned error can't find the container with id 850f5542fbbfebfa7e09ffa77a0e28f8662c633d8d7fcd44b3f68974cb19e58d Jan 22 12:01:39 crc kubenswrapper[5120]: W0122 12:01:39.353228 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabe35b4f_1ae8_4e82_8b22_5f2d8fe01445.slice/crio-44c96f977889e9c5be77ea1116b0f83671bf498d0015e9641d891d612d23ec7f WatchSource:0}: Error finding container 44c96f977889e9c5be77ea1116b0f83671bf498d0015e9641d891d612d23ec7f: Status 404 returned error can't find the container with id 44c96f977889e9c5be77ea1116b0f83671bf498d0015e9641d891d612d23ec7f Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.501888 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-dxmrl" event={"ID":"cb084ddd-669f-4358-a97d-4f3a5ba9fae7","Type":"ContainerDied","Data":"2a31d739d9fbee1fe8e474a9523e8cd0a20910258f9d65e23ea080591bc7c2a6"} Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.501917 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dxmrl" Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.501967 5120 scope.go:117] "RemoveContainer" containerID="76892c612c247112fc7609c48e5bc95f7a9684d449c15a57300915c4087b6623" Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.503478 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc" event={"ID":"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445","Type":"ContainerStarted","Data":"44c96f977889e9c5be77ea1116b0f83671bf498d0015e9641d891d612d23ec7f"} Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.505268 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" event={"ID":"fab5bde7-2cb3-4840-955e-6eec20d29b5d","Type":"ContainerStarted","Data":"850f5542fbbfebfa7e09ffa77a0e28f8662c633d8d7fcd44b3f68974cb19e58d"} Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.506848 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"a5c6b382-0699-4ddd-9be8-7031369555a5","Type":"ContainerStarted","Data":"276b68da221543bdcc6460461785ccf95994beb49cd06591cb2eb132c13d5c0f"} Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.525708 5120 scope.go:117] "RemoveContainer" containerID="176d5c6da4697db412b127de755e4488fee55bf7587afebcbe759912236afe77" Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.540804 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dxmrl"] Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.546897 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dxmrl"] Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.556502 5120 scope.go:117] "RemoveContainer" containerID="e41051f49340ebf37bce642806a3eeef2940a39175cea5236a923352e9d285d7" Jan 22 12:01:39 crc kubenswrapper[5120]: I0122 12:01:39.579269 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" path="/var/lib/kubelet/pods/cb084ddd-669f-4358-a97d-4f3a5ba9fae7/volumes" Jan 22 12:01:41 crc kubenswrapper[5120]: I0122 12:01:41.530610 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be","Type":"ContainerStarted","Data":"3e254b72990295cbc311f335cedd63207da051dc4de52fa375c53f3b096ee27a"} Jan 22 12:01:41 crc kubenswrapper[5120]: I0122 12:01:41.636937 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 22 12:01:41 crc kubenswrapper[5120]: I0122 12:01:41.667701 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 22 12:01:42 crc kubenswrapper[5120]: I0122 12:01:42.947239 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 12:01:43 crc kubenswrapper[5120]: I0122 12:01:43.548743 5120 generic.go:358] "Generic (PLEG): container finished" podID="d6cd7adc-81ad-4b43-bd4c-7f48f1df35be" 
containerID="3e254b72990295cbc311f335cedd63207da051dc4de52fa375c53f3b096ee27a" exitCode=0 Jan 22 12:01:43 crc kubenswrapper[5120]: I0122 12:01:43.548815 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be","Type":"ContainerDied","Data":"3e254b72990295cbc311f335cedd63207da051dc4de52fa375c53f3b096ee27a"} Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.603788 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.605097 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="extract-utilities" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.605116 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="extract-utilities" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.605138 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="registry-server" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.605146 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="registry-server" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.605167 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="extract-content" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.605172 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="extract-content" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.605275 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="cb084ddd-669f-4358-a97d-4f3a5ba9fae7" containerName="registry-server" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.645585 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.645768 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.651379 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.651607 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.652718 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742149 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742211 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742399 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742519 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742576 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742725 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742780 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742855 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742919 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h22z\" (UniqueName: \"kubernetes.io/projected/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-kube-api-access-7h22z\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.742987 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-push\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.743028 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.743060 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.844521 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.844573 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.844604 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.844635 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.844662 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.844800 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.844853 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845121 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845181 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845266 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845516 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 
crc kubenswrapper[5120]: I0122 12:01:44.845545 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845320 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845499 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845599 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845646 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845666 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7h22z\" (UniqueName: \"kubernetes.io/projected/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-kube-api-access-7h22z\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.845690 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-push\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.846057 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.846379 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.846803 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.873858 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.873858 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-push\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.877211 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h22z\" (UniqueName: \"kubernetes.io/projected/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-kube-api-access-7h22z\") pod \"service-telemetry-operator-2-build\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:44 crc kubenswrapper[5120]: I0122 12:01:44.969480 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:01:47 crc kubenswrapper[5120]: I0122 12:01:47.663981 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-n6l95"] Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.026822 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-n6l95"] Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.026940 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.029554 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-hsq9f\"" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.099247 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsb7w\" (UniqueName: \"kubernetes.io/projected/56c64e8f-cd1a-468a-a526-ed7c1ff5ac88-kube-api-access-xsb7w\") pod \"cert-manager-858d87f86b-n6l95\" (UID: \"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88\") " pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.099299 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56c64e8f-cd1a-468a-a526-ed7c1ff5ac88-bound-sa-token\") pod \"cert-manager-858d87f86b-n6l95\" (UID: \"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88\") " pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.201181 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xsb7w\" (UniqueName: \"kubernetes.io/projected/56c64e8f-cd1a-468a-a526-ed7c1ff5ac88-kube-api-access-xsb7w\") pod \"cert-manager-858d87f86b-n6l95\" (UID: \"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88\") " pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.201242 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56c64e8f-cd1a-468a-a526-ed7c1ff5ac88-bound-sa-token\") pod \"cert-manager-858d87f86b-n6l95\" (UID: \"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88\") " pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.229055 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsb7w\" (UniqueName: \"kubernetes.io/projected/56c64e8f-cd1a-468a-a526-ed7c1ff5ac88-kube-api-access-xsb7w\") pod \"cert-manager-858d87f86b-n6l95\" (UID: \"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88\") " pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.229248 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56c64e8f-cd1a-468a-a526-ed7c1ff5ac88-bound-sa-token\") pod \"cert-manager-858d87f86b-n6l95\" (UID: \"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88\") " pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:48 crc kubenswrapper[5120]: I0122 12:01:48.348266 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-n6l95" Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.609092 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-n6l95"] Jan 22 12:01:55 crc kubenswrapper[5120]: W0122 12:01:55.640915 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56c64e8f_cd1a_468a_a526_ed7c1ff5ac88.slice/crio-7c280c2b2ad968b512f9dae71a2c587967e2269c571becfc23c871d50149cbfc WatchSource:0}: Error finding container 7c280c2b2ad968b512f9dae71a2c587967e2269c571becfc23c871d50149cbfc: Status 404 returned error can't find the container with id 7c280c2b2ad968b512f9dae71a2c587967e2269c571becfc23c871d50149cbfc Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.652007 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc" event={"ID":"abe35b4f-1ae8-4e82-8b22-5f2d8fe01445","Type":"ContainerStarted","Data":"50df14d0ca5a1ffda8da164da20a28b3c793f4246e15a287ebd53ec059380bea"} Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.654001 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" event={"ID":"fab5bde7-2cb3-4840-955e-6eec20d29b5d","Type":"ContainerStarted","Data":"cf4ac3fe13147c75b2c89468e3f61177e52092c7cf46342f1cb1806fc5d4a4e3"} Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.657233 5120 generic.go:358] "Generic (PLEG): container finished" podID="d6cd7adc-81ad-4b43-bd4c-7f48f1df35be" containerID="b1569baafafbf7d0356bb08e52c1248e97ff42739c703c4fefa538f3ca6039d0" exitCode=0 Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.657295 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be","Type":"ContainerDied","Data":"b1569baafafbf7d0356bb08e52c1248e97ff42739c703c4fefa538f3ca6039d0"} Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.659762 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-n6l95" event={"ID":"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88","Type":"ContainerStarted","Data":"7c280c2b2ad968b512f9dae71a2c587967e2269c571becfc23c871d50149cbfc"} Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.774908 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.822773 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-qc2vc" podStartSLOduration=10.717309491 podStartE2EDuration="25.822746758s" podCreationTimestamp="2026-01-22 12:01:30 +0000 UTC" firstStartedPulling="2026-01-22 12:01:39.35570543 +0000 UTC m=+834.099653771" lastFinishedPulling="2026-01-22 12:01:54.461142697 +0000 UTC m=+849.205091038" observedRunningTime="2026-01-22 12:01:55.801402981 +0000 UTC m=+850.545351312" watchObservedRunningTime="2026-01-22 12:01:55.822746758 +0000 UTC m=+850.566695109" Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.852261 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" podStartSLOduration=11.696008335 podStartE2EDuration="26.852236824s" podCreationTimestamp="2026-01-22 12:01:29 +0000 UTC" firstStartedPulling="2026-01-22 12:01:39.355672589 +0000 UTC 
m=+834.099620930" lastFinishedPulling="2026-01-22 12:01:54.511901078 +0000 UTC m=+849.255849419" observedRunningTime="2026-01-22 12:01:55.834022772 +0000 UTC m=+850.577971123" watchObservedRunningTime="2026-01-22 12:01:55.852236824 +0000 UTC m=+850.596185165" Jan 22 12:01:55 crc kubenswrapper[5120]: I0122 12:01:55.861063 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.667263 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d6cd7adc-81ad-4b43-bd4c-7f48f1df35be","Type":"ContainerStarted","Data":"7b9979d0e55a1604640eb70e33f26342ecd95b76bfcb410ec6c253bc9cdf96bd"} Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.668804 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.669916 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-n6l95" event={"ID":"56c64e8f-cd1a-468a-a526-ed7c1ff5ac88","Type":"ContainerStarted","Data":"77d6d2cabaaf5f9aa0d772513f2080a81bfca5d63a5dfbae28a27567093f67bb"} Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.671247 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"a5c6b382-0699-4ddd-9be8-7031369555a5","Type":"ContainerStarted","Data":"2a5668b145354eff00e67756d9eae7ae83b1323206c24b6a9b57514b0ef3fe98"} Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.671372 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="a5c6b382-0699-4ddd-9be8-7031369555a5" containerName="manage-dockerfile" containerID="cri-o://2a5668b145354eff00e67756d9eae7ae83b1323206c24b6a9b57514b0ef3fe98" gracePeriod=30 Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.677864 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"22ca9e65-c1f9-472a-8795-d6806d6bf7e0","Type":"ContainerStarted","Data":"dada5f19c5248fac72087635da2dd9d46ccc13893f466778a942313931d53dca"} Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.922807 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=17.830049339 podStartE2EDuration="40.922791845s" podCreationTimestamp="2026-01-22 12:01:16 +0000 UTC" firstStartedPulling="2026-01-22 12:01:18.031786658 +0000 UTC m=+812.775734999" lastFinishedPulling="2026-01-22 12:01:41.124529164 +0000 UTC m=+835.868477505" observedRunningTime="2026-01-22 12:01:56.917694361 +0000 UTC m=+851.661642712" watchObservedRunningTime="2026-01-22 12:01:56.922791845 +0000 UTC m=+851.666740186" Jan 22 12:01:56 crc kubenswrapper[5120]: I0122 12:01:56.952985 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-n6l95" podStartSLOduration=9.952944636 podStartE2EDuration="9.952944636s" podCreationTimestamp="2026-01-22 12:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:01:56.949719478 +0000 UTC m=+851.693667819" watchObservedRunningTime="2026-01-22 12:01:56.952944636 +0000 UTC m=+851.696892977" Jan 22 12:01:57 crc kubenswrapper[5120]: 
I0122 12:01:57.687427 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_a5c6b382-0699-4ddd-9be8-7031369555a5/manage-dockerfile/0.log" Jan 22 12:01:57 crc kubenswrapper[5120]: I0122 12:01:57.687767 5120 generic.go:358] "Generic (PLEG): container finished" podID="a5c6b382-0699-4ddd-9be8-7031369555a5" containerID="2a5668b145354eff00e67756d9eae7ae83b1323206c24b6a9b57514b0ef3fe98" exitCode=1 Jan 22 12:01:57 crc kubenswrapper[5120]: I0122 12:01:57.687937 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"a5c6b382-0699-4ddd-9be8-7031369555a5","Type":"ContainerDied","Data":"2a5668b145354eff00e67756d9eae7ae83b1323206c24b6a9b57514b0ef3fe98"} Jan 22 12:01:57 crc kubenswrapper[5120]: I0122 12:01:57.691998 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"22ca9e65-c1f9-472a-8795-d6806d6bf7e0","Type":"ContainerStarted","Data":"41ce57f52d3737dd8a69946b5e7f98895d2d4314b12d163260db9fed3e9beb41"} Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.072336 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_a5c6b382-0699-4ddd-9be8-7031369555a5/manage-dockerfile/0.log" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.072919 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.144376 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-proxy-ca-bundles\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.144833 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-pull\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.144892 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-push\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.144915 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-buildworkdir\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.144948 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-node-pullsecrets\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.144988 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-r2pw6\" (UniqueName: \"kubernetes.io/projected/a5c6b382-0699-4ddd-9be8-7031369555a5-kube-api-access-r2pw6\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.145039 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-buildcachedir\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.145128 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-ca-bundles\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.145153 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-root\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.145203 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-run\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.145232 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-build-blob-cache\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.145285 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-system-configs\") pod \"a5c6b382-0699-4ddd-9be8-7031369555a5\" (UID: \"a5c6b382-0699-4ddd-9be8-7031369555a5\") " Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.145921 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.146023 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.146159 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.146437 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.146597 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.146797 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.146930 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.147282 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.147627 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.150598 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484722-4kg69"] Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.151359 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a5c6b382-0699-4ddd-9be8-7031369555a5" containerName="manage-dockerfile" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.151385 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5c6b382-0699-4ddd-9be8-7031369555a5" containerName="manage-dockerfile" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.151550 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="a5c6b382-0699-4ddd-9be8-7031369555a5" containerName="manage-dockerfile" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.167123 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484722-4kg69"] Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.167501 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484722-4kg69" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.172541 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.172973 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.173737 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.274284 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5c6b382-0699-4ddd-9be8-7031369555a5-kube-api-access-r2pw6" (OuterVolumeSpecName: "kube-api-access-r2pw6") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "kube-api-access-r2pw6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.274336 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.274862 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "a5c6b382-0699-4ddd-9be8-7031369555a5" (UID: "a5c6b382-0699-4ddd-9be8-7031369555a5"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275162 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275187 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275202 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275217 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275229 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275240 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275252 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5c6b382-0699-4ddd-9be8-7031369555a5-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275264 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275276 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a5c6b382-0699-4ddd-9be8-7031369555a5-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275288 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5c6b382-0699-4ddd-9be8-7031369555a5-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275299 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5c6b382-0699-4ddd-9be8-7031369555a5-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.275310 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r2pw6\" (UniqueName: \"kubernetes.io/projected/a5c6b382-0699-4ddd-9be8-7031369555a5-kube-api-access-r2pw6\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.376490 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xmfp4\" (UniqueName: \"kubernetes.io/projected/724f8cf0-a6c6-45cf-932a-0bdc0247b38f-kube-api-access-xmfp4\") pod \"auto-csr-approver-29484722-4kg69\" (UID: \"724f8cf0-a6c6-45cf-932a-0bdc0247b38f\") " pod="openshift-infra/auto-csr-approver-29484722-4kg69" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.563224 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xmfp4\" (UniqueName: \"kubernetes.io/projected/724f8cf0-a6c6-45cf-932a-0bdc0247b38f-kube-api-access-xmfp4\") pod \"auto-csr-approver-29484722-4kg69\" (UID: \"724f8cf0-a6c6-45cf-932a-0bdc0247b38f\") " pod="openshift-infra/auto-csr-approver-29484722-4kg69" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.589395 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmfp4\" (UniqueName: \"kubernetes.io/projected/724f8cf0-a6c6-45cf-932a-0bdc0247b38f-kube-api-access-xmfp4\") pod \"auto-csr-approver-29484722-4kg69\" (UID: \"724f8cf0-a6c6-45cf-932a-0bdc0247b38f\") " pod="openshift-infra/auto-csr-approver-29484722-4kg69" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.724801 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_a5c6b382-0699-4ddd-9be8-7031369555a5/manage-dockerfile/0.log" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.725574 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"a5c6b382-0699-4ddd-9be8-7031369555a5","Type":"ContainerDied","Data":"276b68da221543bdcc6460461785ccf95994beb49cd06591cb2eb132c13d5c0f"} Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.725760 5120 scope.go:117] "RemoveContainer" containerID="2a5668b145354eff00e67756d9eae7ae83b1323206c24b6a9b57514b0ef3fe98" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.726013 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.766755 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.776321 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 22 12:02:00 crc kubenswrapper[5120]: I0122 12:02:00.812131 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484722-4kg69" Jan 22 12:02:01 crc kubenswrapper[5120]: I0122 12:02:01.589422 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5c6b382-0699-4ddd-9be8-7031369555a5" path="/var/lib/kubelet/pods/a5c6b382-0699-4ddd-9be8-7031369555a5/volumes" Jan 22 12:02:01 crc kubenswrapper[5120]: I0122 12:02:01.590765 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484722-4kg69"] Jan 22 12:02:01 crc kubenswrapper[5120]: I0122 12:02:01.683266 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-r299r" Jan 22 12:02:01 crc kubenswrapper[5120]: I0122 12:02:01.735033 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484722-4kg69" event={"ID":"724f8cf0-a6c6-45cf-932a-0bdc0247b38f","Type":"ContainerStarted","Data":"7b9e14e415736717033fd90f20e8bdea167cb5ebe3d10611764d2aa5e78197b9"} Jan 22 12:02:01 crc kubenswrapper[5120]: I0122 12:02:01.972345 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:02:01 crc kubenswrapper[5120]: I0122 12:02:01.972443 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:02:08 crc kubenswrapper[5120]: I0122 12:02:08.772278 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="d6cd7adc-81ad-4b43-bd4c-7f48f1df35be" containerName="elasticsearch" probeResult="failure" output=< Jan 22 12:02:08 crc kubenswrapper[5120]: {"timestamp": "2026-01-22T12:02:08+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 22 12:02:08 crc kubenswrapper[5120]: > Jan 22 12:02:08 crc kubenswrapper[5120]: I0122 12:02:08.801022 5120 generic.go:358] "Generic (PLEG): container finished" podID="724f8cf0-a6c6-45cf-932a-0bdc0247b38f" containerID="ceb1fb8314d94f06df7d317cf94cdc9dbae9c56f894e19873a0c9d4b5ac76d19" exitCode=0 Jan 22 12:02:08 crc kubenswrapper[5120]: I0122 12:02:08.801185 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484722-4kg69" event={"ID":"724f8cf0-a6c6-45cf-932a-0bdc0247b38f","Type":"ContainerDied","Data":"ceb1fb8314d94f06df7d317cf94cdc9dbae9c56f894e19873a0c9d4b5ac76d19"} Jan 22 12:02:08 crc kubenswrapper[5120]: I0122 12:02:08.803515 5120 generic.go:358] "Generic (PLEG): container finished" podID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerID="41ce57f52d3737dd8a69946b5e7f98895d2d4314b12d163260db9fed3e9beb41" exitCode=0 Jan 22 12:02:08 crc kubenswrapper[5120]: I0122 12:02:08.803616 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"22ca9e65-c1f9-472a-8795-d6806d6bf7e0","Type":"ContainerDied","Data":"41ce57f52d3737dd8a69946b5e7f98895d2d4314b12d163260db9fed3e9beb41"} Jan 22 12:02:10 crc kubenswrapper[5120]: I0122 12:02:10.119310 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484722-4kg69" Jan 22 12:02:10 crc kubenswrapper[5120]: I0122 12:02:10.242297 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmfp4\" (UniqueName: \"kubernetes.io/projected/724f8cf0-a6c6-45cf-932a-0bdc0247b38f-kube-api-access-xmfp4\") pod \"724f8cf0-a6c6-45cf-932a-0bdc0247b38f\" (UID: \"724f8cf0-a6c6-45cf-932a-0bdc0247b38f\") " Jan 22 12:02:10 crc kubenswrapper[5120]: I0122 12:02:10.253017 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/724f8cf0-a6c6-45cf-932a-0bdc0247b38f-kube-api-access-xmfp4" (OuterVolumeSpecName: "kube-api-access-xmfp4") pod "724f8cf0-a6c6-45cf-932a-0bdc0247b38f" (UID: "724f8cf0-a6c6-45cf-932a-0bdc0247b38f"). InnerVolumeSpecName "kube-api-access-xmfp4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:02:10 crc kubenswrapper[5120]: I0122 12:02:10.343786 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xmfp4\" (UniqueName: \"kubernetes.io/projected/724f8cf0-a6c6-45cf-932a-0bdc0247b38f-kube-api-access-xmfp4\") on node \"crc\" DevicePath \"\"" Jan 22 12:02:10 crc kubenswrapper[5120]: I0122 12:02:10.816780 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484722-4kg69" Jan 22 12:02:10 crc kubenswrapper[5120]: I0122 12:02:10.816800 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484722-4kg69" event={"ID":"724f8cf0-a6c6-45cf-932a-0bdc0247b38f","Type":"ContainerDied","Data":"7b9e14e415736717033fd90f20e8bdea167cb5ebe3d10611764d2aa5e78197b9"} Jan 22 12:02:10 crc kubenswrapper[5120]: I0122 12:02:10.816834 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b9e14e415736717033fd90f20e8bdea167cb5ebe3d10611764d2aa5e78197b9" Jan 22 12:02:11 crc kubenswrapper[5120]: I0122 12:02:11.195879 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484716-phf4d"] Jan 22 12:02:11 crc kubenswrapper[5120]: I0122 12:02:11.202399 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484716-phf4d"] Jan 22 12:02:11 crc kubenswrapper[5120]: I0122 12:02:11.582223 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a45690da-bfac-4359-88d2-e604fb44508e" path="/var/lib/kubelet/pods/a45690da-bfac-4359-88d2-e604fb44508e/volumes" Jan 22 12:02:12 crc kubenswrapper[5120]: I0122 12:02:12.831927 5120 generic.go:358] "Generic (PLEG): container finished" podID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerID="7eee3f73c044c06a41ca4676e52bbdefc0678bf251415bcfb5e7731f4c73e941" exitCode=0 Jan 22 12:02:12 crc kubenswrapper[5120]: I0122 12:02:12.832015 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"22ca9e65-c1f9-472a-8795-d6806d6bf7e0","Type":"ContainerDied","Data":"7eee3f73c044c06a41ca4676e52bbdefc0678bf251415bcfb5e7731f4c73e941"} Jan 22 12:02:12 crc kubenswrapper[5120]: I0122 12:02:12.878936 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_22ca9e65-c1f9-472a-8795-d6806d6bf7e0/manage-dockerfile/0.log" Jan 22 12:02:13 crc kubenswrapper[5120]: I0122 12:02:13.783048 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" 
podUID="d6cd7adc-81ad-4b43-bd4c-7f48f1df35be" containerName="elasticsearch" probeResult="failure" output=< Jan 22 12:02:13 crc kubenswrapper[5120]: {"timestamp": "2026-01-22T12:02:13+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 22 12:02:13 crc kubenswrapper[5120]: > Jan 22 12:02:15 crc kubenswrapper[5120]: I0122 12:02:15.862341 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"22ca9e65-c1f9-472a-8795-d6806d6bf7e0","Type":"ContainerStarted","Data":"a36f85d5fefa4980196ba8b9794328aa8a92dfc9eea7cd5f06b187392adb2de4"} Jan 22 12:02:15 crc kubenswrapper[5120]: I0122 12:02:15.903914 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=31.903876301 podStartE2EDuration="31.903876301s" podCreationTimestamp="2026-01-22 12:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:02:15.90173583 +0000 UTC m=+870.645684171" watchObservedRunningTime="2026-01-22 12:02:15.903876301 +0000 UTC m=+870.647824642" Jan 22 12:02:18 crc kubenswrapper[5120]: I0122 12:02:18.779310 5120 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="d6cd7adc-81ad-4b43-bd4c-7f48f1df35be" containerName="elasticsearch" probeResult="failure" output=< Jan 22 12:02:18 crc kubenswrapper[5120]: {"timestamp": "2026-01-22T12:02:18+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 22 12:02:18 crc kubenswrapper[5120]: > Jan 22 12:02:24 crc kubenswrapper[5120]: I0122 12:02:24.440036 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 22 12:02:31 crc kubenswrapper[5120]: I0122 12:02:31.972549 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:02:32 crc kubenswrapper[5120]: I0122 12:02:31.973200 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:02:45 crc kubenswrapper[5120]: I0122 12:02:45.918671 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:02:45 crc kubenswrapper[5120]: I0122 12:02:45.920930 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:02:45 crc kubenswrapper[5120]: I0122 12:02:45.934610 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:02:45 crc kubenswrapper[5120]: I0122 12:02:45.935149 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:02:54 crc 
kubenswrapper[5120]: I0122 12:02:54.459152 5120 scope.go:117] "RemoveContainer" containerID="50058b8b91e5dd9329c621c05d95a98bf79e0360bf7ed78ecfbcba7624fecffa" Jan 22 12:03:01 crc kubenswrapper[5120]: I0122 12:03:01.972898 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:03:01 crc kubenswrapper[5120]: I0122 12:03:01.973666 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:03:01 crc kubenswrapper[5120]: I0122 12:03:01.973732 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:03:01 crc kubenswrapper[5120]: I0122 12:03:01.974453 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b1b1dbcaf6053c4f4e587f597b1d0bcb38e183b1d64f8acf48abb200ec2450a"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:03:01 crc kubenswrapper[5120]: I0122 12:03:01.974524 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://7b1b1dbcaf6053c4f4e587f597b1d0bcb38e183b1d64f8acf48abb200ec2450a" gracePeriod=600 Jan 22 12:03:02 crc kubenswrapper[5120]: I0122 12:03:02.109755 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:03:02 crc kubenswrapper[5120]: I0122 12:03:02.806397 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="7b1b1dbcaf6053c4f4e587f597b1d0bcb38e183b1d64f8acf48abb200ec2450a" exitCode=0 Jan 22 12:03:02 crc kubenswrapper[5120]: I0122 12:03:02.806473 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"7b1b1dbcaf6053c4f4e587f597b1d0bcb38e183b1d64f8acf48abb200ec2450a"} Jan 22 12:03:02 crc kubenswrapper[5120]: I0122 12:03:02.807338 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"853669b192f5827170a3bbd5818f19fbda7dd2bb66abdc7a7f19541d0bf117e7"} Jan 22 12:03:02 crc kubenswrapper[5120]: I0122 12:03:02.807399 5120 scope.go:117] "RemoveContainer" containerID="bce4cc383007abddfe015e880c39e78b9257e350f68f93cf80d0801b94ef0ab7" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.139285 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484724-5shbh"] Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.143235 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="724f8cf0-a6c6-45cf-932a-0bdc0247b38f" containerName="oc" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.143267 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="724f8cf0-a6c6-45cf-932a-0bdc0247b38f" containerName="oc" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.143522 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="724f8cf0-a6c6-45cf-932a-0bdc0247b38f" containerName="oc" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.153833 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484724-5shbh"] Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.154125 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484724-5shbh" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.156420 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.157324 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.157625 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.255508 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t85sk\" (UniqueName: \"kubernetes.io/projected/b86909ba-6fe2-4fdd-994d-e5014840c597-kube-api-access-t85sk\") pod \"auto-csr-approver-29484724-5shbh\" (UID: \"b86909ba-6fe2-4fdd-994d-e5014840c597\") " pod="openshift-infra/auto-csr-approver-29484724-5shbh" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.357147 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t85sk\" (UniqueName: \"kubernetes.io/projected/b86909ba-6fe2-4fdd-994d-e5014840c597-kube-api-access-t85sk\") pod \"auto-csr-approver-29484724-5shbh\" (UID: \"b86909ba-6fe2-4fdd-994d-e5014840c597\") " pod="openshift-infra/auto-csr-approver-29484724-5shbh" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.383469 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t85sk\" (UniqueName: \"kubernetes.io/projected/b86909ba-6fe2-4fdd-994d-e5014840c597-kube-api-access-t85sk\") pod \"auto-csr-approver-29484724-5shbh\" (UID: \"b86909ba-6fe2-4fdd-994d-e5014840c597\") " pod="openshift-infra/auto-csr-approver-29484724-5shbh" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.532774 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484724-5shbh" Jan 22 12:04:00 crc kubenswrapper[5120]: I0122 12:04:00.732436 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484724-5shbh"] Jan 22 12:04:01 crc kubenswrapper[5120]: I0122 12:04:01.333916 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484724-5shbh" event={"ID":"b86909ba-6fe2-4fdd-994d-e5014840c597","Type":"ContainerStarted","Data":"4e020c80487390fc4c17b7c2780c4095510efeee91951a12d81dcf3bda1051d0"} Jan 22 12:04:04 crc kubenswrapper[5120]: I0122 12:04:04.359813 5120 generic.go:358] "Generic (PLEG): container finished" podID="b86909ba-6fe2-4fdd-994d-e5014840c597" containerID="ebc82e27b7ff9936fb8ab3baff996147f2e548280fc1e0007bc5efe24e9891e6" exitCode=0 Jan 22 12:04:04 crc kubenswrapper[5120]: I0122 12:04:04.359895 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484724-5shbh" event={"ID":"b86909ba-6fe2-4fdd-994d-e5014840c597","Type":"ContainerDied","Data":"ebc82e27b7ff9936fb8ab3baff996147f2e548280fc1e0007bc5efe24e9891e6"} Jan 22 12:04:05 crc kubenswrapper[5120]: I0122 12:04:05.642832 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484724-5shbh" Jan 22 12:04:05 crc kubenswrapper[5120]: I0122 12:04:05.737088 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t85sk\" (UniqueName: \"kubernetes.io/projected/b86909ba-6fe2-4fdd-994d-e5014840c597-kube-api-access-t85sk\") pod \"b86909ba-6fe2-4fdd-994d-e5014840c597\" (UID: \"b86909ba-6fe2-4fdd-994d-e5014840c597\") " Jan 22 12:04:05 crc kubenswrapper[5120]: I0122 12:04:05.745642 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b86909ba-6fe2-4fdd-994d-e5014840c597-kube-api-access-t85sk" (OuterVolumeSpecName: "kube-api-access-t85sk") pod "b86909ba-6fe2-4fdd-994d-e5014840c597" (UID: "b86909ba-6fe2-4fdd-994d-e5014840c597"). InnerVolumeSpecName "kube-api-access-t85sk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:04:05 crc kubenswrapper[5120]: I0122 12:04:05.839608 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t85sk\" (UniqueName: \"kubernetes.io/projected/b86909ba-6fe2-4fdd-994d-e5014840c597-kube-api-access-t85sk\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:06 crc kubenswrapper[5120]: I0122 12:04:06.376747 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484724-5shbh" Jan 22 12:04:06 crc kubenswrapper[5120]: I0122 12:04:06.376779 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484724-5shbh" event={"ID":"b86909ba-6fe2-4fdd-994d-e5014840c597","Type":"ContainerDied","Data":"4e020c80487390fc4c17b7c2780c4095510efeee91951a12d81dcf3bda1051d0"} Jan 22 12:04:06 crc kubenswrapper[5120]: I0122 12:04:06.377452 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e020c80487390fc4c17b7c2780c4095510efeee91951a12d81dcf3bda1051d0" Jan 22 12:04:06 crc kubenswrapper[5120]: I0122 12:04:06.738696 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484718-tbtpd"] Jan 22 12:04:06 crc kubenswrapper[5120]: I0122 12:04:06.751172 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484718-tbtpd"] Jan 22 12:04:07 crc kubenswrapper[5120]: I0122 12:04:07.580500 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79a0076-aa90-4841-9865-b94aef438d2e" path="/var/lib/kubelet/pods/b79a0076-aa90-4841-9865-b94aef438d2e/volumes" Jan 22 12:04:16 crc kubenswrapper[5120]: I0122 12:04:16.453214 5120 generic.go:358] "Generic (PLEG): container finished" podID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerID="a36f85d5fefa4980196ba8b9794328aa8a92dfc9eea7cd5f06b187392adb2de4" exitCode=0 Jan 22 12:04:16 crc kubenswrapper[5120]: I0122 12:04:16.453302 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"22ca9e65-c1f9-472a-8795-d6806d6bf7e0","Type":"ContainerDied","Data":"a36f85d5fefa4980196ba8b9794328aa8a92dfc9eea7cd5f06b187392adb2de4"} Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.708768 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.838534 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h22z\" (UniqueName: \"kubernetes.io/projected/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-kube-api-access-7h22z\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.838588 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-pull\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.838612 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-push\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.838643 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-system-configs\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.838729 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-root\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839249 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-node-pullsecrets\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839301 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839352 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-run\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839377 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-ca-bundles\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839456 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildworkdir\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839504 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-proxy-ca-bundles\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839530 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildcachedir\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839563 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-blob-cache\") pod \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\" (UID: \"22ca9e65-c1f9-472a-8795-d6806d6bf7e0\") " Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839656 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.839772 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.840160 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.840180 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.840195 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.840308 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.840612 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.840743 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.846696 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.846982 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.847223 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-kube-api-access-7h22z" (OuterVolumeSpecName: "kube-api-access-7h22z") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "kube-api-access-7h22z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.872730 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.942377 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.942416 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7h22z\" (UniqueName: \"kubernetes.io/projected/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-kube-api-access-7h22z\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.942426 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.942436 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.942444 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.942456 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:17 crc kubenswrapper[5120]: I0122 12:04:17.942467 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:18 crc kubenswrapper[5120]: I0122 12:04:18.046198 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:18 crc kubenswrapper[5120]: I0122 12:04:18.145152 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:18 crc kubenswrapper[5120]: I0122 12:04:18.471589 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 22 12:04:18 crc kubenswrapper[5120]: I0122 12:04:18.471583 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"22ca9e65-c1f9-472a-8795-d6806d6bf7e0","Type":"ContainerDied","Data":"dada5f19c5248fac72087635da2dd9d46ccc13893f466778a942313931d53dca"} Jan 22 12:04:18 crc kubenswrapper[5120]: I0122 12:04:18.471636 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dada5f19c5248fac72087635da2dd9d46ccc13893f466778a942313931d53dca" Jan 22 12:04:20 crc kubenswrapper[5120]: I0122 12:04:20.729338 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "22ca9e65-c1f9-472a-8795-d6806d6bf7e0" (UID: "22ca9e65-c1f9-472a-8795-d6806d6bf7e0"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:20 crc kubenswrapper[5120]: I0122 12:04:20.785638 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/22ca9e65-c1f9-472a-8795-d6806d6bf7e0-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.315471 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316780 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerName="docker-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316819 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerName="docker-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316845 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b86909ba-6fe2-4fdd-994d-e5014840c597" containerName="oc" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316853 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b86909ba-6fe2-4fdd-994d-e5014840c597" containerName="oc" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316870 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerName="manage-dockerfile" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316878 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerName="manage-dockerfile" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316898 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerName="git-clone" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.316906 5120 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerName="git-clone" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.317159 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="22ca9e65-c1f9-472a-8795-d6806d6bf7e0" containerName="docker-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.317181 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="b86909ba-6fe2-4fdd-994d-e5014840c597" containerName="oc" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.466601 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.466836 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.469560 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-global-ca\"" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.470332 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-sys-config\"" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.471670 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\"" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.474610 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-ca\"" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.613608 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.613724 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.613801 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.613895 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.613941 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgbhk\" 
(UniqueName: \"kubernetes.io/projected/3528bca7-c1b4-485a-a9bd-240346daabf5-kube-api-access-vgbhk\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.614088 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.614145 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-push\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.614216 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.614347 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.614471 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.614520 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.614577 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.716534 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: 
\"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.716591 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.716618 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717390 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717410 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717478 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717496 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717596 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717681 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717759 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717780 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vgbhk\" (UniqueName: \"kubernetes.io/projected/3528bca7-c1b4-485a-a9bd-240346daabf5-kube-api-access-vgbhk\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717797 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.717982 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.718025 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-push\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.718071 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.718145 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.718469 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.718869 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc 
kubenswrapper[5120]: I0122 12:04:22.718939 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.719336 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.719702 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.725721 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-push\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.726171 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.739443 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgbhk\" (UniqueName: \"kubernetes.io/projected/3528bca7-c1b4-485a-a9bd-240346daabf5-kube-api-access-vgbhk\") pod \"smart-gateway-operator-1-build\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:22 crc kubenswrapper[5120]: I0122 12:04:22.781997 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:23 crc kubenswrapper[5120]: I0122 12:04:23.017457 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 12:04:23 crc kubenswrapper[5120]: I0122 12:04:23.511301 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"3528bca7-c1b4-485a-a9bd-240346daabf5","Type":"ContainerStarted","Data":"def8f1d0d1f58ef3d13c999ddc952bd268ad1c6d1be4ab666cfcde1f32d97150"} Jan 22 12:04:23 crc kubenswrapper[5120]: I0122 12:04:23.511704 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"3528bca7-c1b4-485a-a9bd-240346daabf5","Type":"ContainerStarted","Data":"3ade49d83aa5d35969943e2a6648e1a2bec8d3618c1ba25e134b6d8407a2b261"} Jan 22 12:04:24 crc kubenswrapper[5120]: I0122 12:04:24.522067 5120 generic.go:358] "Generic (PLEG): container finished" podID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerID="def8f1d0d1f58ef3d13c999ddc952bd268ad1c6d1be4ab666cfcde1f32d97150" exitCode=0 Jan 22 12:04:24 crc kubenswrapper[5120]: I0122 12:04:24.522202 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"3528bca7-c1b4-485a-a9bd-240346daabf5","Type":"ContainerDied","Data":"def8f1d0d1f58ef3d13c999ddc952bd268ad1c6d1be4ab666cfcde1f32d97150"} Jan 22 12:04:26 crc kubenswrapper[5120]: I0122 12:04:26.542166 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"3528bca7-c1b4-485a-a9bd-240346daabf5","Type":"ContainerStarted","Data":"12de23dc8367ad8cb68c260a14425ae16c6c4b05ce1208c9744c48c7a3814bd0"} Jan 22 12:04:26 crc kubenswrapper[5120]: I0122 12:04:26.572127 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=4.572102668 podStartE2EDuration="4.572102668s" podCreationTimestamp="2026-01-22 12:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:04:26.571210972 +0000 UTC m=+1001.315159363" watchObservedRunningTime="2026-01-22 12:04:26.572102668 +0000 UTC m=+1001.316051009" Jan 22 12:04:32 crc kubenswrapper[5120]: I0122 12:04:32.900980 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 12:04:32 crc kubenswrapper[5120]: I0122 12:04:32.901976 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerName="docker-build" containerID="cri-o://12de23dc8367ad8cb68c260a14425ae16c6c4b05ce1208c9744c48c7a3814bd0" gracePeriod=30 Jan 22 12:04:34 crc kubenswrapper[5120]: I0122 12:04:34.889325 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.496710 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.499311 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.504386 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-ca\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.504399 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-global-ca\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.504600 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-sys-config\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.516563 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517353 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517416 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517465 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517538 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517622 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517702 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-pull\") pod 
\"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517726 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517768 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-push\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.517797 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.519625 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcjh4\" (UniqueName: \"kubernetes.io/projected/379c9b40-0f89-404c-ba85-6b98c4a35a4f-kube-api-access-lcjh4\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.520844 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.622689 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-push\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.622757 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.623598 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lcjh4\" (UniqueName: \"kubernetes.io/projected/379c9b40-0f89-404c-ba85-6b98c4a35a4f-kube-api-access-lcjh4\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 
12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.623757 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.623810 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.623910 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624044 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624068 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624102 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624155 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624199 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624297 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-pull\") pod 
\"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624327 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.624597 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.625286 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.625305 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.625745 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.626288 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.626875 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.627480 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.627568 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.629631 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.629947 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-push\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.635747 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_3528bca7-c1b4-485a-a9bd-240346daabf5/docker-build/0.log" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.636329 5120 generic.go:358] "Generic (PLEG): container finished" podID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerID="12de23dc8367ad8cb68c260a14425ae16c6c4b05ce1208c9744c48c7a3814bd0" exitCode=1 Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.636450 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"3528bca7-c1b4-485a-a9bd-240346daabf5","Type":"ContainerDied","Data":"12de23dc8367ad8cb68c260a14425ae16c6c4b05ce1208c9744c48c7a3814bd0"} Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.643828 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcjh4\" (UniqueName: \"kubernetes.io/projected/379c9b40-0f89-404c-ba85-6b98c4a35a4f-kube-api-access-lcjh4\") pod \"smart-gateway-operator-2-build\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.795517 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_3528bca7-c1b4-485a-a9bd-240346daabf5/docker-build/0.log" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.796471 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831324 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-pull\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831388 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-push\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831418 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-node-pullsecrets\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831518 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-buildworkdir\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831547 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgbhk\" (UniqueName: \"kubernetes.io/projected/3528bca7-c1b4-485a-a9bd-240346daabf5-kube-api-access-vgbhk\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831685 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-root\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831719 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-build-blob-cache\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831754 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-system-configs\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831791 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-proxy-ca-bundles\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831866 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-ca-bundles\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831916 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-run\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.831975 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-buildcachedir\") pod \"3528bca7-c1b4-485a-a9bd-240346daabf5\" (UID: \"3528bca7-c1b4-485a-a9bd-240346daabf5\") " Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.832375 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.833581 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.833652 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.834256 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.835266 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.835756 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.836492 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.839797 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.840492 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.842061 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.842389 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3528bca7-c1b4-485a-a9bd-240346daabf5-kube-api-access-vgbhk" (OuterVolumeSpecName: "kube-api-access-vgbhk") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "kube-api-access-vgbhk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.843364 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.933981 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934012 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934023 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934033 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3528bca7-c1b4-485a-a9bd-240346daabf5-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934057 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934067 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934075 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934084 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/3528bca7-c1b4-485a-a9bd-240346daabf5-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934094 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3528bca7-c1b4-485a-a9bd-240346daabf5-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934102 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.934111 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vgbhk\" (UniqueName: \"kubernetes.io/projected/3528bca7-c1b4-485a-a9bd-240346daabf5-kube-api-access-vgbhk\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:37 crc kubenswrapper[5120]: I0122 12:04:37.990408 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "3528bca7-c1b4-485a-a9bd-240346daabf5" (UID: "3528bca7-c1b4-485a-a9bd-240346daabf5"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.035793 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3528bca7-c1b4-485a-a9bd-240346daabf5-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.058967 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 22 12:04:38 crc kubenswrapper[5120]: W0122 12:04:38.065667 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod379c9b40_0f89_404c_ba85_6b98c4a35a4f.slice/crio-04e7e7e8a9fa2535fb0fd7a0c0568914644353d2c942358647cf4740b49c2030 WatchSource:0}: Error finding container 04e7e7e8a9fa2535fb0fd7a0c0568914644353d2c942358647cf4740b49c2030: Status 404 returned error can't find the container with id 04e7e7e8a9fa2535fb0fd7a0c0568914644353d2c942358647cf4740b49c2030 Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.646552 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"379c9b40-0f89-404c-ba85-6b98c4a35a4f","Type":"ContainerStarted","Data":"5bbc946df07e08218832d593c225859e482f955978fd6e9a62ce7631704f808d"} Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.647040 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"379c9b40-0f89-404c-ba85-6b98c4a35a4f","Type":"ContainerStarted","Data":"04e7e7e8a9fa2535fb0fd7a0c0568914644353d2c942358647cf4740b49c2030"} Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.648871 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_3528bca7-c1b4-485a-a9bd-240346daabf5/docker-build/0.log" Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.649929 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"3528bca7-c1b4-485a-a9bd-240346daabf5","Type":"ContainerDied","Data":"3ade49d83aa5d35969943e2a6648e1a2bec8d3618c1ba25e134b6d8407a2b261"} Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.650017 5120 scope.go:117] "RemoveContainer" containerID="12de23dc8367ad8cb68c260a14425ae16c6c4b05ce1208c9744c48c7a3814bd0" Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.650023 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.734089 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.734216 5120 scope.go:117] "RemoveContainer" containerID="def8f1d0d1f58ef3d13c999ddc952bd268ad1c6d1be4ab666cfcde1f32d97150" Jan 22 12:04:38 crc kubenswrapper[5120]: I0122 12:04:38.741982 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 22 12:04:39 crc kubenswrapper[5120]: I0122 12:04:39.583280 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3528bca7-c1b4-485a-a9bd-240346daabf5" path="/var/lib/kubelet/pods/3528bca7-c1b4-485a-a9bd-240346daabf5/volumes" Jan 22 12:04:39 crc kubenswrapper[5120]: I0122 12:04:39.659817 5120 generic.go:358] "Generic (PLEG): container finished" podID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerID="5bbc946df07e08218832d593c225859e482f955978fd6e9a62ce7631704f808d" exitCode=0 Jan 22 12:04:39 crc kubenswrapper[5120]: I0122 12:04:39.659889 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"379c9b40-0f89-404c-ba85-6b98c4a35a4f","Type":"ContainerDied","Data":"5bbc946df07e08218832d593c225859e482f955978fd6e9a62ce7631704f808d"} Jan 22 12:04:40 crc kubenswrapper[5120]: I0122 12:04:40.670448 5120 generic.go:358] "Generic (PLEG): container finished" podID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerID="cd887475f11acaa15c3251476f1ae3e6666ac309a6334a7d739d7beadfd34df8" exitCode=0 Jan 22 12:04:40 crc kubenswrapper[5120]: I0122 12:04:40.670518 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"379c9b40-0f89-404c-ba85-6b98c4a35a4f","Type":"ContainerDied","Data":"cd887475f11acaa15c3251476f1ae3e6666ac309a6334a7d739d7beadfd34df8"} Jan 22 12:04:40 crc kubenswrapper[5120]: I0122 12:04:40.705120 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_379c9b40-0f89-404c-ba85-6b98c4a35a4f/manage-dockerfile/0.log" Jan 22 12:04:41 crc kubenswrapper[5120]: I0122 12:04:41.687234 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"379c9b40-0f89-404c-ba85-6b98c4a35a4f","Type":"ContainerStarted","Data":"b77ac776efac06fa0bd34abbb085e087408bdfdddc3f45473edcc558ebcb87c7"} Jan 22 12:04:41 crc kubenswrapper[5120]: I0122 12:04:41.718243 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=7.718223348 podStartE2EDuration="7.718223348s" podCreationTimestamp="2026-01-22 12:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:04:41.713328155 +0000 UTC m=+1016.457276486" watchObservedRunningTime="2026-01-22 12:04:41.718223348 +0000 UTC m=+1016.462171689" Jan 22 12:04:54 crc kubenswrapper[5120]: I0122 12:04:54.604783 5120 scope.go:117] "RemoveContainer" containerID="48535da82209ba80a74337bfe4adf5c3fb5d1066acf6b74856b7a35e8ae721fa" Jan 22 12:05:31 crc kubenswrapper[5120]: I0122 12:05:31.972689 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:05:31 crc kubenswrapper[5120]: I0122 12:05:31.973751 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:05:53 crc kubenswrapper[5120]: I0122 12:05:53.258211 5120 generic.go:358] "Generic (PLEG): container finished" podID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerID="b77ac776efac06fa0bd34abbb085e087408bdfdddc3f45473edcc558ebcb87c7" exitCode=0 Jan 22 12:05:53 crc kubenswrapper[5120]: I0122 12:05:53.258309 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"379c9b40-0f89-404c-ba85-6b98c4a35a4f","Type":"ContainerDied","Data":"b77ac776efac06fa0bd34abbb085e087408bdfdddc3f45473edcc558ebcb87c7"} Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.605878 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669021 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcjh4\" (UniqueName: \"kubernetes.io/projected/379c9b40-0f89-404c-ba85-6b98c4a35a4f-kube-api-access-lcjh4\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669063 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-proxy-ca-bundles\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669100 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-node-pullsecrets\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669139 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-ca-bundles\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669162 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildcachedir\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669212 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669708 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669742 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-system-configs\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669883 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-blob-cache\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669938 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildworkdir\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.669985 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-root\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.670014 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-push\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.670042 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-pull\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.670098 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-run\") pod \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\" (UID: \"379c9b40-0f89-404c-ba85-6b98c4a35a4f\") " Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.670347 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.670465 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.670485 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.670495 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.672525 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.672584 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.672603 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.677234 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.677337 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.677494 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). 
InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.677544 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379c9b40-0f89-404c-ba85-6b98c4a35a4f-kube-api-access-lcjh4" (OuterVolumeSpecName: "kube-api-access-lcjh4") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "kube-api-access-lcjh4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.771554 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.771588 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.771600 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.771610 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/379c9b40-0f89-404c-ba85-6b98c4a35a4f-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.771619 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.771628 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lcjh4\" (UniqueName: \"kubernetes.io/projected/379c9b40-0f89-404c-ba85-6b98c4a35a4f-kube-api-access-lcjh4\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.771638 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.893920 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:05:54 crc kubenswrapper[5120]: I0122 12:05:54.975627 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:55 crc kubenswrapper[5120]: I0122 12:05:55.278307 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"379c9b40-0f89-404c-ba85-6b98c4a35a4f","Type":"ContainerDied","Data":"04e7e7e8a9fa2535fb0fd7a0c0568914644353d2c942358647cf4740b49c2030"} Jan 22 12:05:55 crc kubenswrapper[5120]: I0122 12:05:55.278421 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04e7e7e8a9fa2535fb0fd7a0c0568914644353d2c942358647cf4740b49c2030" Jan 22 12:05:55 crc kubenswrapper[5120]: I0122 12:05:55.278332 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 22 12:05:56 crc kubenswrapper[5120]: I0122 12:05:56.695251 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "379c9b40-0f89-404c-ba85-6b98c4a35a4f" (UID: "379c9b40-0f89-404c-ba85-6b98c4a35a4f"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:05:56 crc kubenswrapper[5120]: I0122 12:05:56.698811 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/379c9b40-0f89-404c-ba85-6b98c4a35a4f-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.948423 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.949909 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerName="manage-dockerfile" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.949930 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerName="manage-dockerfile" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.949972 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerName="docker-build" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.949981 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerName="docker-build" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.949997 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerName="manage-dockerfile" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.950004 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerName="manage-dockerfile" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.950018 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerName="git-clone" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.950025 5120 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerName="git-clone" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.950041 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerName="docker-build" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.950049 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerName="docker-build" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.950180 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="379c9b40-0f89-404c-ba85-6b98c4a35a4f" containerName="docker-build" Jan 22 12:05:59 crc kubenswrapper[5120]: I0122 12:05:59.950198 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3528bca7-c1b4-485a-a9bd-240346daabf5" containerName="docker-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.105277 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.105436 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.107574 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-sys-config\"" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.108816 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-global-ca\"" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.109153 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-ca\"" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.109456 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\"" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.144275 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484726-c8lz2"] Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.148530 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484726-c8lz2" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.149002 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484726-c8lz2"] Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.151753 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.152034 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.152194 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157072 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-root\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157110 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-push\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157145 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-buildcachedir\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157346 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157478 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157674 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-run\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157821 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-ca-bundles\") pod \"sg-core-1-build\" (UID: 
\"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157879 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4tfv\" (UniqueName: \"kubernetes.io/projected/a36cb230-54e1-4799-a4a6-9009eaba532c-kube-api-access-x4tfv\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157906 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-pull\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.157997 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-buildworkdir\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.158023 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.158054 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-system-configs\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.259723 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x4tfv\" (UniqueName: \"kubernetes.io/projected/a36cb230-54e1-4799-a4a6-9009eaba532c-kube-api-access-x4tfv\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.259776 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-pull\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.259856 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-buildworkdir\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.259873 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-build-blob-cache\") pod \"sg-core-1-build\" (UID: 
\"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.259908 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-system-configs\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.259938 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-root\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.259985 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-push\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.260057 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-buildcachedir\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.260091 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.260110 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.260152 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgq6w\" (UniqueName: \"kubernetes.io/projected/3858bc47-7853-4b6a-b130-aea8f1f3e8c7-kube-api-access-tgq6w\") pod \"auto-csr-approver-29484726-c8lz2\" (UID: \"3858bc47-7853-4b6a-b130-aea8f1f3e8c7\") " pod="openshift-infra/auto-csr-approver-29484726-c8lz2" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.260191 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-run\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.260231 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " 
pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.261515 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-buildcachedir\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.261631 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.261722 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.262011 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-buildworkdir\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.262158 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-run\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.262345 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.262392 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-system-configs\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.262509 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-root\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.262724 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.271685 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: 
\"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-push\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.271712 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-pull\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.276173 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4tfv\" (UniqueName: \"kubernetes.io/projected/a36cb230-54e1-4799-a4a6-9009eaba532c-kube-api-access-x4tfv\") pod \"sg-core-1-build\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.361207 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tgq6w\" (UniqueName: \"kubernetes.io/projected/3858bc47-7853-4b6a-b130-aea8f1f3e8c7-kube-api-access-tgq6w\") pod \"auto-csr-approver-29484726-c8lz2\" (UID: \"3858bc47-7853-4b6a-b130-aea8f1f3e8c7\") " pod="openshift-infra/auto-csr-approver-29484726-c8lz2" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.378655 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgq6w\" (UniqueName: \"kubernetes.io/projected/3858bc47-7853-4b6a-b130-aea8f1f3e8c7-kube-api-access-tgq6w\") pod \"auto-csr-approver-29484726-c8lz2\" (UID: \"3858bc47-7853-4b6a-b130-aea8f1f3e8c7\") " pod="openshift-infra/auto-csr-approver-29484726-c8lz2" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.441194 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.474017 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484726-c8lz2" Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.720523 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 12:06:00 crc kubenswrapper[5120]: I0122 12:06:00.969640 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484726-c8lz2"] Jan 22 12:06:00 crc kubenswrapper[5120]: W0122 12:06:00.972055 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3858bc47_7853_4b6a_b130_aea8f1f3e8c7.slice/crio-6b1486016505dc35b6016c4ba38a156b7fcc1795c088d7045dc89d90a41a12c3 WatchSource:0}: Error finding container 6b1486016505dc35b6016c4ba38a156b7fcc1795c088d7045dc89d90a41a12c3: Status 404 returned error can't find the container with id 6b1486016505dc35b6016c4ba38a156b7fcc1795c088d7045dc89d90a41a12c3 Jan 22 12:06:01 crc kubenswrapper[5120]: I0122 12:06:01.326278 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484726-c8lz2" event={"ID":"3858bc47-7853-4b6a-b130-aea8f1f3e8c7","Type":"ContainerStarted","Data":"6b1486016505dc35b6016c4ba38a156b7fcc1795c088d7045dc89d90a41a12c3"} Jan 22 12:06:01 crc kubenswrapper[5120]: I0122 12:06:01.328107 5120 generic.go:358] "Generic (PLEG): container finished" podID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerID="dc6449c955d62c9a3e099b456e3c0d923de6e758236bcfb769de9a44469f1bd0" exitCode=0 Jan 22 12:06:01 crc kubenswrapper[5120]: I0122 12:06:01.328235 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"a36cb230-54e1-4799-a4a6-9009eaba532c","Type":"ContainerDied","Data":"dc6449c955d62c9a3e099b456e3c0d923de6e758236bcfb769de9a44469f1bd0"} Jan 22 12:06:01 crc kubenswrapper[5120]: I0122 12:06:01.328345 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"a36cb230-54e1-4799-a4a6-9009eaba532c","Type":"ContainerStarted","Data":"7818e737ee5ed95e5328c0dfb23b10ce422c0f3ef74c8c4836187c64df4a40cb"} Jan 22 12:06:01 crc kubenswrapper[5120]: I0122 12:06:01.972373 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:06:01 crc kubenswrapper[5120]: I0122 12:06:01.972935 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:06:02 crc kubenswrapper[5120]: I0122 12:06:02.339302 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"a36cb230-54e1-4799-a4a6-9009eaba532c","Type":"ContainerStarted","Data":"0fffe95a71f122423e973d688f499e166896cffbe78136ee95397c29b861b628"} Jan 22 12:06:02 crc kubenswrapper[5120]: I0122 12:06:02.378051 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.378012266 podStartE2EDuration="3.378012266s" podCreationTimestamp="2026-01-22 12:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:06:02.371824957 +0000 UTC m=+1097.115773318" watchObservedRunningTime="2026-01-22 12:06:02.378012266 +0000 UTC m=+1097.121960617" Jan 22 12:06:03 crc kubenswrapper[5120]: I0122 12:06:03.348805 5120 generic.go:358] "Generic (PLEG): container finished" podID="3858bc47-7853-4b6a-b130-aea8f1f3e8c7" containerID="23dd071c493eb18691c5ccc422d25241938024f9dc9c51c1c687fd54070a5cca" exitCode=0 Jan 22 12:06:03 crc kubenswrapper[5120]: I0122 12:06:03.348923 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484726-c8lz2" event={"ID":"3858bc47-7853-4b6a-b130-aea8f1f3e8c7","Type":"ContainerDied","Data":"23dd071c493eb18691c5ccc422d25241938024f9dc9c51c1c687fd54070a5cca"} Jan 22 12:06:04 crc kubenswrapper[5120]: I0122 12:06:04.637861 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484726-c8lz2" Jan 22 12:06:04 crc kubenswrapper[5120]: I0122 12:06:04.742362 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgq6w\" (UniqueName: \"kubernetes.io/projected/3858bc47-7853-4b6a-b130-aea8f1f3e8c7-kube-api-access-tgq6w\") pod \"3858bc47-7853-4b6a-b130-aea8f1f3e8c7\" (UID: \"3858bc47-7853-4b6a-b130-aea8f1f3e8c7\") " Jan 22 12:06:04 crc kubenswrapper[5120]: I0122 12:06:04.751300 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3858bc47-7853-4b6a-b130-aea8f1f3e8c7-kube-api-access-tgq6w" (OuterVolumeSpecName: "kube-api-access-tgq6w") pod "3858bc47-7853-4b6a-b130-aea8f1f3e8c7" (UID: "3858bc47-7853-4b6a-b130-aea8f1f3e8c7"). InnerVolumeSpecName "kube-api-access-tgq6w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:06:04 crc kubenswrapper[5120]: I0122 12:06:04.844216 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tgq6w\" (UniqueName: \"kubernetes.io/projected/3858bc47-7853-4b6a-b130-aea8f1f3e8c7-kube-api-access-tgq6w\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:05 crc kubenswrapper[5120]: I0122 12:06:05.379755 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484726-c8lz2" event={"ID":"3858bc47-7853-4b6a-b130-aea8f1f3e8c7","Type":"ContainerDied","Data":"6b1486016505dc35b6016c4ba38a156b7fcc1795c088d7045dc89d90a41a12c3"} Jan 22 12:06:05 crc kubenswrapper[5120]: I0122 12:06:05.379830 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b1486016505dc35b6016c4ba38a156b7fcc1795c088d7045dc89d90a41a12c3" Jan 22 12:06:05 crc kubenswrapper[5120]: I0122 12:06:05.379779 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484726-c8lz2" Jan 22 12:06:05 crc kubenswrapper[5120]: I0122 12:06:05.728248 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484720-f92nq"] Jan 22 12:06:05 crc kubenswrapper[5120]: I0122 12:06:05.734933 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484720-f92nq"] Jan 22 12:06:07 crc kubenswrapper[5120]: I0122 12:06:07.589671 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee0a1780-1d96-46a3-8386-55404b6d1299" path="/var/lib/kubelet/pods/ee0a1780-1d96-46a3-8386-55404b6d1299/volumes" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.185484 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.186378 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerName="docker-build" containerID="cri-o://0fffe95a71f122423e973d688f499e166896cffbe78136ee95397c29b861b628" gracePeriod=30 Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.415237 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_a36cb230-54e1-4799-a4a6-9009eaba532c/docker-build/0.log" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.415870 5120 generic.go:358] "Generic (PLEG): container finished" podID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerID="0fffe95a71f122423e973d688f499e166896cffbe78136ee95397c29b861b628" exitCode=1 Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.415997 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"a36cb230-54e1-4799-a4a6-9009eaba532c","Type":"ContainerDied","Data":"0fffe95a71f122423e973d688f499e166896cffbe78136ee95397c29b861b628"} Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.618820 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_a36cb230-54e1-4799-a4a6-9009eaba532c/docker-build/0.log" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.619206 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740352 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-run\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740455 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-pull\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740553 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-push\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740641 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-proxy-ca-bundles\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740755 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-buildcachedir\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740798 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4tfv\" (UniqueName: \"kubernetes.io/projected/a36cb230-54e1-4799-a4a6-9009eaba532c-kube-api-access-x4tfv\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740839 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-ca-bundles\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740890 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.740911 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-system-configs\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.741014 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-node-pullsecrets\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.741050 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-build-blob-cache\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.741090 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-buildworkdir\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.741136 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-root\") pod \"a36cb230-54e1-4799-a4a6-9009eaba532c\" (UID: \"a36cb230-54e1-4799-a4a6-9009eaba532c\") " Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.741220 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.742054 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.742076 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a36cb230-54e1-4799-a4a6-9009eaba532c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.742346 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.742368 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.742364 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.742776 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.744119 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.748833 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a36cb230-54e1-4799-a4a6-9009eaba532c-kube-api-access-x4tfv" (OuterVolumeSpecName: "kube-api-access-x4tfv") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "kube-api-access-x4tfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.748915 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.749072 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.817256 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843414 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x4tfv\" (UniqueName: \"kubernetes.io/projected/a36cb230-54e1-4799-a4a6-9009eaba532c-kube-api-access-x4tfv\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843463 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843504 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843513 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843523 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843532 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843543 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843557 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/a36cb230-54e1-4799-a4a6-9009eaba532c-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.843566 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a36cb230-54e1-4799-a4a6-9009eaba532c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.865880 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a36cb230-54e1-4799-a4a6-9009eaba532c" (UID: "a36cb230-54e1-4799-a4a6-9009eaba532c"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:06:10 crc kubenswrapper[5120]: I0122 12:06:10.944433 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a36cb230-54e1-4799-a4a6-9009eaba532c-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.424604 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_a36cb230-54e1-4799-a4a6-9009eaba532c/docker-build/0.log" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.425152 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.425201 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"a36cb230-54e1-4799-a4a6-9009eaba532c","Type":"ContainerDied","Data":"7818e737ee5ed95e5328c0dfb23b10ce422c0f3ef74c8c4836187c64df4a40cb"} Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.425299 5120 scope.go:117] "RemoveContainer" containerID="0fffe95a71f122423e973d688f499e166896cffbe78136ee95397c29b861b628" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.466996 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.469980 5120 scope.go:117] "RemoveContainer" containerID="dc6449c955d62c9a3e099b456e3c0d923de6e758236bcfb769de9a44469f1bd0" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.475894 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.581435 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a36cb230-54e1-4799-a4a6-9009eaba532c" path="/var/lib/kubelet/pods/a36cb230-54e1-4799-a4a6-9009eaba532c/volumes" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.836454 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.837722 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3858bc47-7853-4b6a-b130-aea8f1f3e8c7" containerName="oc" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.837753 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3858bc47-7853-4b6a-b130-aea8f1f3e8c7" containerName="oc" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.837795 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerName="docker-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.837804 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerName="docker-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.837817 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerName="manage-dockerfile" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.837826 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerName="manage-dockerfile" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.837974 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="a36cb230-54e1-4799-a4a6-9009eaba532c" containerName="docker-build" Jan 22 12:06:11 
crc kubenswrapper[5120]: I0122 12:06:11.837993 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3858bc47-7853-4b6a-b130-aea8f1f3e8c7" containerName="oc" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.858227 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.858462 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.862702 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-sys-config\"" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.862742 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-ca\"" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.863138 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\"" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.864510 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-global-ca\"" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.961871 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-pull\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.961981 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-system-configs\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962044 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962099 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildworkdir\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962140 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildcachedir\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962175 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-proxy-ca-bundles\") pod 
\"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962206 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-root\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962262 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962311 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962386 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-push\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962438 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2sgm\" (UniqueName: \"kubernetes.io/projected/4f1f5ecd-00ad-4747-b1eb-d701595508ad-kube-api-access-b2sgm\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:11 crc kubenswrapper[5120]: I0122 12:06:11.962480 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-run\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.064509 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-pull\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.064584 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-system-configs\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.064614 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.064837 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildworkdir\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.064972 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildcachedir\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065009 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065081 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildcachedir\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065126 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-root\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065124 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065226 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065280 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065339 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-push\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " 
pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065374 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b2sgm\" (UniqueName: \"kubernetes.io/projected/4f1f5ecd-00ad-4747-b1eb-d701595508ad-kube-api-access-b2sgm\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065403 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-run\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065883 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildworkdir\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.065928 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-system-configs\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.066275 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-root\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.066514 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.067217 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.067248 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-run\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.067314 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.073331 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-pull\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.076227 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-push\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.104741 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2sgm\" (UniqueName: \"kubernetes.io/projected/4f1f5ecd-00ad-4747-b1eb-d701595508ad-kube-api-access-b2sgm\") pod \"sg-core-2-build\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.184270 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 22 12:06:12 crc kubenswrapper[5120]: I0122 12:06:12.449202 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 22 12:06:12 crc kubenswrapper[5120]: W0122 12:06:12.452278 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f1f5ecd_00ad_4747_b1eb_d701595508ad.slice/crio-eb5c6f13316b5753739a96c02f22620ad9a7455959acbd68aed8bb15ee7d4bbd WatchSource:0}: Error finding container eb5c6f13316b5753739a96c02f22620ad9a7455959acbd68aed8bb15ee7d4bbd: Status 404 returned error can't find the container with id eb5c6f13316b5753739a96c02f22620ad9a7455959acbd68aed8bb15ee7d4bbd Jan 22 12:06:13 crc kubenswrapper[5120]: I0122 12:06:13.456895 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"4f1f5ecd-00ad-4747-b1eb-d701595508ad","Type":"ContainerStarted","Data":"dfa599db095e5f8a988903fd8f1e1dd510e7a0654e6e4c200c8220e36442bda6"} Jan 22 12:06:13 crc kubenswrapper[5120]: I0122 12:06:13.456993 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"4f1f5ecd-00ad-4747-b1eb-d701595508ad","Type":"ContainerStarted","Data":"eb5c6f13316b5753739a96c02f22620ad9a7455959acbd68aed8bb15ee7d4bbd"} Jan 22 12:06:14 crc kubenswrapper[5120]: I0122 12:06:14.468578 5120 generic.go:358] "Generic (PLEG): container finished" podID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerID="dfa599db095e5f8a988903fd8f1e1dd510e7a0654e6e4c200c8220e36442bda6" exitCode=0 Jan 22 12:06:14 crc kubenswrapper[5120]: I0122 12:06:14.468721 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"4f1f5ecd-00ad-4747-b1eb-d701595508ad","Type":"ContainerDied","Data":"dfa599db095e5f8a988903fd8f1e1dd510e7a0654e6e4c200c8220e36442bda6"} Jan 22 12:06:15 crc kubenswrapper[5120]: I0122 12:06:15.478445 5120 generic.go:358] "Generic (PLEG): container finished" podID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerID="91730ac168074586a3bede1ac6f7a0e951dd552d1fc754cd02e012bb515ca1c7" exitCode=0 Jan 22 12:06:15 crc kubenswrapper[5120]: I0122 12:06:15.478556 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" 
event={"ID":"4f1f5ecd-00ad-4747-b1eb-d701595508ad","Type":"ContainerDied","Data":"91730ac168074586a3bede1ac6f7a0e951dd552d1fc754cd02e012bb515ca1c7"} Jan 22 12:06:15 crc kubenswrapper[5120]: I0122 12:06:15.515543 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_4f1f5ecd-00ad-4747-b1eb-d701595508ad/manage-dockerfile/0.log" Jan 22 12:06:16 crc kubenswrapper[5120]: I0122 12:06:16.493553 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"4f1f5ecd-00ad-4747-b1eb-d701595508ad","Type":"ContainerStarted","Data":"b9f7e397919ba3cd7982a08e93e44e47e51c825517d4db01db3c212592a32a58"} Jan 22 12:06:16 crc kubenswrapper[5120]: I0122 12:06:16.542789 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.5427550629999995 podStartE2EDuration="5.542755063s" podCreationTimestamp="2026-01-22 12:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:06:16.530863548 +0000 UTC m=+1111.274811929" watchObservedRunningTime="2026-01-22 12:06:16.542755063 +0000 UTC m=+1111.286703444" Jan 22 12:06:31 crc kubenswrapper[5120]: I0122 12:06:31.972392 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:06:31 crc kubenswrapper[5120]: I0122 12:06:31.973100 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:06:31 crc kubenswrapper[5120]: I0122 12:06:31.973158 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:06:31 crc kubenswrapper[5120]: I0122 12:06:31.973784 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"853669b192f5827170a3bbd5818f19fbda7dd2bb66abdc7a7f19541d0bf117e7"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:06:31 crc kubenswrapper[5120]: I0122 12:06:31.973842 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://853669b192f5827170a3bbd5818f19fbda7dd2bb66abdc7a7f19541d0bf117e7" gracePeriod=600 Jan 22 12:06:32 crc kubenswrapper[5120]: I0122 12:06:32.606901 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="853669b192f5827170a3bbd5818f19fbda7dd2bb66abdc7a7f19541d0bf117e7" exitCode=0 Jan 22 12:06:32 crc kubenswrapper[5120]: I0122 12:06:32.607536 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" 
event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"853669b192f5827170a3bbd5818f19fbda7dd2bb66abdc7a7f19541d0bf117e7"} Jan 22 12:06:32 crc kubenswrapper[5120]: I0122 12:06:32.607568 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"0ce45fe111abe3fb25265c0d4114782f8899115da5ec0e060bbf1264c0bf05d4"} Jan 22 12:06:32 crc kubenswrapper[5120]: I0122 12:06:32.607590 5120 scope.go:117] "RemoveContainer" containerID="7b1b1dbcaf6053c4f4e587f597b1d0bcb38e183b1d64f8acf48abb200ec2450a" Jan 22 12:06:54 crc kubenswrapper[5120]: I0122 12:06:54.757183 5120 scope.go:117] "RemoveContainer" containerID="a76aaf951602603ba06dd3faa64300e242c288026ffa56088b05a6f5a164c1d1" Jan 22 12:07:46 crc kubenswrapper[5120]: I0122 12:07:46.018648 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:07:46 crc kubenswrapper[5120]: I0122 12:07:46.021877 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:07:46 crc kubenswrapper[5120]: I0122 12:07:46.032195 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:07:46 crc kubenswrapper[5120]: I0122 12:07:46.032448 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.138288 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484728-j8w4j"] Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.269936 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484728-j8w4j"] Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.270125 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484728-j8w4j" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.277456 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.277592 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.277646 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.312910 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tfrh\" (UniqueName: \"kubernetes.io/projected/ba296aaf-56d0-49e4-b647-aae80f6fbd52-kube-api-access-2tfrh\") pod \"auto-csr-approver-29484728-j8w4j\" (UID: \"ba296aaf-56d0-49e4-b647-aae80f6fbd52\") " pod="openshift-infra/auto-csr-approver-29484728-j8w4j" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.414171 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2tfrh\" (UniqueName: \"kubernetes.io/projected/ba296aaf-56d0-49e4-b647-aae80f6fbd52-kube-api-access-2tfrh\") pod \"auto-csr-approver-29484728-j8w4j\" (UID: \"ba296aaf-56d0-49e4-b647-aae80f6fbd52\") " pod="openshift-infra/auto-csr-approver-29484728-j8w4j" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.456527 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tfrh\" (UniqueName: \"kubernetes.io/projected/ba296aaf-56d0-49e4-b647-aae80f6fbd52-kube-api-access-2tfrh\") pod \"auto-csr-approver-29484728-j8w4j\" (UID: \"ba296aaf-56d0-49e4-b647-aae80f6fbd52\") " pod="openshift-infra/auto-csr-approver-29484728-j8w4j" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.588529 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484728-j8w4j" Jan 22 12:08:00 crc kubenswrapper[5120]: I0122 12:08:00.858037 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484728-j8w4j"] Jan 22 12:08:01 crc kubenswrapper[5120]: I0122 12:08:01.428600 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484728-j8w4j" event={"ID":"ba296aaf-56d0-49e4-b647-aae80f6fbd52","Type":"ContainerStarted","Data":"714967c8508e8b311da357f9c3b2c7250bcc38f92e52892f6dc0da12fc91017a"} Jan 22 12:08:02 crc kubenswrapper[5120]: I0122 12:08:02.437661 5120 generic.go:358] "Generic (PLEG): container finished" podID="ba296aaf-56d0-49e4-b647-aae80f6fbd52" containerID="8c734d96e4b1f47996c023313a0ce278e60832df482833ed84ccfa06214e5cc6" exitCode=0 Jan 22 12:08:02 crc kubenswrapper[5120]: I0122 12:08:02.437743 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484728-j8w4j" event={"ID":"ba296aaf-56d0-49e4-b647-aae80f6fbd52","Type":"ContainerDied","Data":"8c734d96e4b1f47996c023313a0ce278e60832df482833ed84ccfa06214e5cc6"} Jan 22 12:08:03 crc kubenswrapper[5120]: I0122 12:08:03.728623 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484728-j8w4j" Jan 22 12:08:03 crc kubenswrapper[5120]: I0122 12:08:03.871756 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tfrh\" (UniqueName: \"kubernetes.io/projected/ba296aaf-56d0-49e4-b647-aae80f6fbd52-kube-api-access-2tfrh\") pod \"ba296aaf-56d0-49e4-b647-aae80f6fbd52\" (UID: \"ba296aaf-56d0-49e4-b647-aae80f6fbd52\") " Jan 22 12:08:03 crc kubenswrapper[5120]: I0122 12:08:03.879389 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba296aaf-56d0-49e4-b647-aae80f6fbd52-kube-api-access-2tfrh" (OuterVolumeSpecName: "kube-api-access-2tfrh") pod "ba296aaf-56d0-49e4-b647-aae80f6fbd52" (UID: "ba296aaf-56d0-49e4-b647-aae80f6fbd52"). InnerVolumeSpecName "kube-api-access-2tfrh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:08:03 crc kubenswrapper[5120]: I0122 12:08:03.973632 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2tfrh\" (UniqueName: \"kubernetes.io/projected/ba296aaf-56d0-49e4-b647-aae80f6fbd52-kube-api-access-2tfrh\") on node \"crc\" DevicePath \"\"" Jan 22 12:08:04 crc kubenswrapper[5120]: I0122 12:08:04.475018 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484728-j8w4j" event={"ID":"ba296aaf-56d0-49e4-b647-aae80f6fbd52","Type":"ContainerDied","Data":"714967c8508e8b311da357f9c3b2c7250bcc38f92e52892f6dc0da12fc91017a"} Jan 22 12:08:04 crc kubenswrapper[5120]: I0122 12:08:04.475091 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="714967c8508e8b311da357f9c3b2c7250bcc38f92e52892f6dc0da12fc91017a" Jan 22 12:08:04 crc kubenswrapper[5120]: I0122 12:08:04.475201 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484728-j8w4j" Jan 22 12:08:04 crc kubenswrapper[5120]: I0122 12:08:04.816541 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484722-4kg69"] Jan 22 12:08:04 crc kubenswrapper[5120]: I0122 12:08:04.825758 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484722-4kg69"] Jan 22 12:08:05 crc kubenswrapper[5120]: I0122 12:08:05.579834 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="724f8cf0-a6c6-45cf-932a-0bdc0247b38f" path="/var/lib/kubelet/pods/724f8cf0-a6c6-45cf-932a-0bdc0247b38f/volumes" Jan 22 12:08:54 crc kubenswrapper[5120]: I0122 12:08:54.901254 5120 scope.go:117] "RemoveContainer" containerID="ceb1fb8314d94f06df7d317cf94cdc9dbae9c56f894e19873a0c9d4b5ac76d19" Jan 22 12:09:01 crc kubenswrapper[5120]: I0122 12:09:01.973070 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:09:01 crc kubenswrapper[5120]: I0122 12:09:01.975632 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:09:31 crc kubenswrapper[5120]: I0122 12:09:31.973321 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:09:31 crc kubenswrapper[5120]: I0122 12:09:31.974346 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:10:00 crc kubenswrapper[5120]: I0122 12:10:00.152054 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484730-z4qj9"] Jan 22 12:10:00 crc kubenswrapper[5120]: I0122 12:10:00.154038 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ba296aaf-56d0-49e4-b647-aae80f6fbd52" containerName="oc" Jan 22 12:10:00 crc kubenswrapper[5120]: I0122 12:10:00.154064 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba296aaf-56d0-49e4-b647-aae80f6fbd52" containerName="oc" Jan 22 12:10:00 crc kubenswrapper[5120]: I0122 12:10:00.154255 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="ba296aaf-56d0-49e4-b647-aae80f6fbd52" containerName="oc" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.221744 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484730-z4qj9"] Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.222229 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484730-z4qj9" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.226843 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.227423 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.226880 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.366237 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztxvh\" (UniqueName: \"kubernetes.io/projected/86fa02fb-d5af-46f8-b19a-9af5fd7e5353-kube-api-access-ztxvh\") pod \"auto-csr-approver-29484730-z4qj9\" (UID: \"86fa02fb-d5af-46f8-b19a-9af5fd7e5353\") " pod="openshift-infra/auto-csr-approver-29484730-z4qj9" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.467606 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ztxvh\" (UniqueName: \"kubernetes.io/projected/86fa02fb-d5af-46f8-b19a-9af5fd7e5353-kube-api-access-ztxvh\") pod \"auto-csr-approver-29484730-z4qj9\" (UID: \"86fa02fb-d5af-46f8-b19a-9af5fd7e5353\") " pod="openshift-infra/auto-csr-approver-29484730-z4qj9" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.503197 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztxvh\" (UniqueName: \"kubernetes.io/projected/86fa02fb-d5af-46f8-b19a-9af5fd7e5353-kube-api-access-ztxvh\") pod \"auto-csr-approver-29484730-z4qj9\" (UID: \"86fa02fb-d5af-46f8-b19a-9af5fd7e5353\") " pod="openshift-infra/auto-csr-approver-29484730-z4qj9" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.558934 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484730-z4qj9" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.827785 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484730-z4qj9"] Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.839292 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.973400 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.973512 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.973575 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.974616 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ce45fe111abe3fb25265c0d4114782f8899115da5ec0e060bbf1264c0bf05d4"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:10:01 crc kubenswrapper[5120]: I0122 12:10:01.974921 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://0ce45fe111abe3fb25265c0d4114782f8899115da5ec0e060bbf1264c0bf05d4" gracePeriod=600 Jan 22 12:10:02 crc kubenswrapper[5120]: I0122 12:10:02.461284 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="0ce45fe111abe3fb25265c0d4114782f8899115da5ec0e060bbf1264c0bf05d4" exitCode=0 Jan 22 12:10:02 crc kubenswrapper[5120]: I0122 12:10:02.461378 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"0ce45fe111abe3fb25265c0d4114782f8899115da5ec0e060bbf1264c0bf05d4"} Jan 22 12:10:02 crc kubenswrapper[5120]: I0122 12:10:02.462031 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"719354116d7ea0573a90aa1ae4bf7fd19ddeee3f2ea6145219b58e58618f132f"} Jan 22 12:10:02 crc kubenswrapper[5120]: I0122 12:10:02.462064 5120 scope.go:117] "RemoveContainer" containerID="853669b192f5827170a3bbd5818f19fbda7dd2bb66abdc7a7f19541d0bf117e7" Jan 22 12:10:02 crc kubenswrapper[5120]: I0122 12:10:02.464519 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484730-z4qj9" 
event={"ID":"86fa02fb-d5af-46f8-b19a-9af5fd7e5353","Type":"ContainerStarted","Data":"b3f8f1387e023435dd3460361245306121979c47430ee1623d66a3ecdb1e5896"} Jan 22 12:10:03 crc kubenswrapper[5120]: I0122 12:10:03.472987 5120 generic.go:358] "Generic (PLEG): container finished" podID="86fa02fb-d5af-46f8-b19a-9af5fd7e5353" containerID="e435702e7c696c62fc24675d08a9198377bd5a0c61f1adb503efe9265edbf5bd" exitCode=0 Jan 22 12:10:03 crc kubenswrapper[5120]: I0122 12:10:03.473128 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484730-z4qj9" event={"ID":"86fa02fb-d5af-46f8-b19a-9af5fd7e5353","Type":"ContainerDied","Data":"e435702e7c696c62fc24675d08a9198377bd5a0c61f1adb503efe9265edbf5bd"} Jan 22 12:10:04 crc kubenswrapper[5120]: I0122 12:10:04.730527 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484730-z4qj9" Jan 22 12:10:04 crc kubenswrapper[5120]: I0122 12:10:04.822261 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztxvh\" (UniqueName: \"kubernetes.io/projected/86fa02fb-d5af-46f8-b19a-9af5fd7e5353-kube-api-access-ztxvh\") pod \"86fa02fb-d5af-46f8-b19a-9af5fd7e5353\" (UID: \"86fa02fb-d5af-46f8-b19a-9af5fd7e5353\") " Jan 22 12:10:04 crc kubenswrapper[5120]: I0122 12:10:04.833501 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86fa02fb-d5af-46f8-b19a-9af5fd7e5353-kube-api-access-ztxvh" (OuterVolumeSpecName: "kube-api-access-ztxvh") pod "86fa02fb-d5af-46f8-b19a-9af5fd7e5353" (UID: "86fa02fb-d5af-46f8-b19a-9af5fd7e5353"). InnerVolumeSpecName "kube-api-access-ztxvh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:10:04 crc kubenswrapper[5120]: I0122 12:10:04.924117 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ztxvh\" (UniqueName: \"kubernetes.io/projected/86fa02fb-d5af-46f8-b19a-9af5fd7e5353-kube-api-access-ztxvh\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:05 crc kubenswrapper[5120]: I0122 12:10:05.499593 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484730-z4qj9" event={"ID":"86fa02fb-d5af-46f8-b19a-9af5fd7e5353","Type":"ContainerDied","Data":"b3f8f1387e023435dd3460361245306121979c47430ee1623d66a3ecdb1e5896"} Jan 22 12:10:05 crc kubenswrapper[5120]: I0122 12:10:05.499638 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3f8f1387e023435dd3460361245306121979c47430ee1623d66a3ecdb1e5896" Jan 22 12:10:05 crc kubenswrapper[5120]: I0122 12:10:05.499677 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484730-z4qj9" Jan 22 12:10:05 crc kubenswrapper[5120]: I0122 12:10:05.810968 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484724-5shbh"] Jan 22 12:10:05 crc kubenswrapper[5120]: I0122 12:10:05.816561 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484724-5shbh"] Jan 22 12:10:07 crc kubenswrapper[5120]: I0122 12:10:07.598666 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b86909ba-6fe2-4fdd-994d-e5014840c597" path="/var/lib/kubelet/pods/b86909ba-6fe2-4fdd-994d-e5014840c597/volumes" Jan 22 12:10:09 crc kubenswrapper[5120]: I0122 12:10:09.547849 5120 generic.go:358] "Generic (PLEG): container finished" podID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerID="b9f7e397919ba3cd7982a08e93e44e47e51c825517d4db01db3c212592a32a58" exitCode=0 Jan 22 12:10:09 crc kubenswrapper[5120]: I0122 12:10:09.547987 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"4f1f5ecd-00ad-4747-b1eb-d701595508ad","Type":"ContainerDied","Data":"b9f7e397919ba3cd7982a08e93e44e47e51c825517d4db01db3c212592a32a58"} Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.820179 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.921894 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-push\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922001 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-system-configs\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922032 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildworkdir\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922096 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-ca-bundles\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922119 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-root\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922211 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-pull\") pod 
\"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922287 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2sgm\" (UniqueName: \"kubernetes.io/projected/4f1f5ecd-00ad-4747-b1eb-d701595508ad-kube-api-access-b2sgm\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922433 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-blob-cache\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922552 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-proxy-ca-bundles\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922618 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildcachedir\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.922685 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-run\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.923556 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.923689 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-node-pullsecrets\") pod \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\" (UID: \"4f1f5ecd-00ad-4747-b1eb-d701595508ad\") " Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924085 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924085 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). 
InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924158 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924391 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924412 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924420 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4f1f5ecd-00ad-4747-b1eb-d701595508ad-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924429 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.924592 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.925043 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.929723 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.929757 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.931475 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f1f5ecd-00ad-4747-b1eb-d701595508ad-kube-api-access-b2sgm" (OuterVolumeSpecName: "kube-api-access-b2sgm") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "kube-api-access-b2sgm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:10:10 crc kubenswrapper[5120]: I0122 12:10:10.936274 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.025837 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.025883 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.025894 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.025904 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.025916 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/4f1f5ecd-00ad-4747-b1eb-d701595508ad-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.025925 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b2sgm\" (UniqueName: \"kubernetes.io/projected/4f1f5ecd-00ad-4747-b1eb-d701595508ad-kube-api-access-b2sgm\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.290660 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.330394 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.565309 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"4f1f5ecd-00ad-4747-b1eb-d701595508ad","Type":"ContainerDied","Data":"eb5c6f13316b5753739a96c02f22620ad9a7455959acbd68aed8bb15ee7d4bbd"} Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.565350 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb5c6f13316b5753739a96c02f22620ad9a7455959acbd68aed8bb15ee7d4bbd" Jan 22 12:10:11 crc kubenswrapper[5120]: I0122 12:10:11.565326 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 22 12:10:13 crc kubenswrapper[5120]: I0122 12:10:13.383364 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "4f1f5ecd-00ad-4747-b1eb-d701595508ad" (UID: "4f1f5ecd-00ad-4747-b1eb-d701595508ad"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:13 crc kubenswrapper[5120]: I0122 12:10:13.461917 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4f1f5ecd-00ad-4747-b1eb-d701595508ad-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.857598 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859411 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerName="docker-build" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859440 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerName="docker-build" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859482 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerName="manage-dockerfile" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859497 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerName="manage-dockerfile" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859512 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86fa02fb-d5af-46f8-b19a-9af5fd7e5353" containerName="oc" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859525 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="86fa02fb-d5af-46f8-b19a-9af5fd7e5353" containerName="oc" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859548 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerName="git-clone" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859559 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerName="git-clone" Jan 
22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859733 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="86fa02fb-d5af-46f8-b19a-9af5fd7e5353" containerName="oc" Jan 22 12:10:16 crc kubenswrapper[5120]: I0122 12:10:16.859760 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="4f1f5ecd-00ad-4747-b1eb-d701595508ad" containerName="docker-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.138018 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.138296 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.142108 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-sys-config\"" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.142474 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-ca\"" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.142989 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\"" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.143031 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-global-ca\"" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219238 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219287 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219315 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219335 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219470 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-pull\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: 
I0122 12:10:17.219554 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219583 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-push\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219792 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219857 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtlq8\" (UniqueName: \"kubernetes.io/projected/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-kube-api-access-rtlq8\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.219924 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.220048 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.220091 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322192 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322327 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " 
pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322379 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322434 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-pull\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322503 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322552 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-push\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322669 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322730 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rtlq8\" (UniqueName: \"kubernetes.io/projected/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-kube-api-access-rtlq8\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322802 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322867 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.322925 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc 
kubenswrapper[5120]: I0122 12:10:17.322944 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.323053 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.323354 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.324080 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.324617 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.324679 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.324882 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.324893 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.325216 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.325347 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.330473 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-pull\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.330474 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-push\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.346620 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtlq8\" (UniqueName: \"kubernetes.io/projected/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-kube-api-access-rtlq8\") pod \"sg-bridge-1-build\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.465938 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:17 crc kubenswrapper[5120]: I0122 12:10:17.660383 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:10:18 crc kubenswrapper[5120]: I0122 12:10:18.613090 5120 generic.go:358] "Generic (PLEG): container finished" podID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerID="fce5fcfa24b61113e059dbc8a86e3eae595fe68ab0c46f4e59c275faea435189" exitCode=0 Jan 22 12:10:18 crc kubenswrapper[5120]: I0122 12:10:18.613169 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1","Type":"ContainerDied","Data":"fce5fcfa24b61113e059dbc8a86e3eae595fe68ab0c46f4e59c275faea435189"} Jan 22 12:10:18 crc kubenswrapper[5120]: I0122 12:10:18.613902 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1","Type":"ContainerStarted","Data":"9264394407fc53c3467d3c267a00f3c26c21801d6472430023b8b496c2178810"} Jan 22 12:10:19 crc kubenswrapper[5120]: I0122 12:10:19.628655 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1","Type":"ContainerStarted","Data":"57e999484b6cb1f425a09d886e11c9aeca5c6f9d5ed91cc920bac2c4a290adce"} Jan 22 12:10:19 crc kubenswrapper[5120]: I0122 12:10:19.655844 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.655825665 podStartE2EDuration="3.655825665s" podCreationTimestamp="2026-01-22 12:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:10:19.654165125 +0000 UTC m=+1354.398113466" watchObservedRunningTime="2026-01-22 12:10:19.655825665 +0000 UTC m=+1354.399774006" Jan 22 12:10:27 crc kubenswrapper[5120]: I0122 12:10:27.498514 5120 kubelet.go:2553] 
"SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:10:27 crc kubenswrapper[5120]: I0122 12:10:27.499463 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerName="docker-build" containerID="cri-o://57e999484b6cb1f425a09d886e11c9aeca5c6f9d5ed91cc920bac2c4a290adce" gracePeriod=30 Jan 22 12:10:27 crc kubenswrapper[5120]: I0122 12:10:27.690906 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1/docker-build/0.log" Jan 22 12:10:27 crc kubenswrapper[5120]: I0122 12:10:27.691834 5120 generic.go:358] "Generic (PLEG): container finished" podID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerID="57e999484b6cb1f425a09d886e11c9aeca5c6f9d5ed91cc920bac2c4a290adce" exitCode=1 Jan 22 12:10:27 crc kubenswrapper[5120]: I0122 12:10:27.692007 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1","Type":"ContainerDied","Data":"57e999484b6cb1f425a09d886e11c9aeca5c6f9d5ed91cc920bac2c4a290adce"} Jan 22 12:10:27 crc kubenswrapper[5120]: I0122 12:10:27.934676 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1/docker-build/0.log" Jan 22 12:10:27 crc kubenswrapper[5120]: I0122 12:10:27.936061 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094477 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-blob-cache\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094534 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-proxy-ca-bundles\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094567 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-system-configs\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094652 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-run\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094714 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-push\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094736 5120 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildcachedir\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094814 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildworkdir\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094874 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-pull\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.094913 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtlq8\" (UniqueName: \"kubernetes.io/projected/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-kube-api-access-rtlq8\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.095030 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-node-pullsecrets\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.095095 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-root\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.095147 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-ca-bundles\") pod \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\" (UID: \"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1\") " Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.095910 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.096981 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.097206 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.097784 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.098002 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.098092 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.098361 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.106817 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.106901 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-kube-api-access-rtlq8" (OuterVolumeSpecName: "kube-api-access-rtlq8") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "kube-api-access-rtlq8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.107178 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.163044 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197487 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197564 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rtlq8\" (UniqueName: \"kubernetes.io/projected/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-kube-api-access-rtlq8\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197584 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197601 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197620 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197636 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197654 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197671 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197690 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: 
I0122 12:10:28.197707 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.197728 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.260262 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" (UID: "c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.299575 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.705667 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1/docker-build/0.log" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.707686 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.707727 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1","Type":"ContainerDied","Data":"9264394407fc53c3467d3c267a00f3c26c21801d6472430023b8b496c2178810"} Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.707871 5120 scope.go:117] "RemoveContainer" containerID="57e999484b6cb1f425a09d886e11c9aeca5c6f9d5ed91cc920bac2c4a290adce" Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.762639 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.773669 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 22 12:10:28 crc kubenswrapper[5120]: I0122 12:10:28.777622 5120 scope.go:117] "RemoveContainer" containerID="fce5fcfa24b61113e059dbc8a86e3eae595fe68ab0c46f4e59c275faea435189" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.058648 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.059825 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerName="docker-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.059859 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerName="docker-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.059876 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerName="manage-dockerfile" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.059885 5120 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerName="manage-dockerfile" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.060135 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" containerName="docker-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.085735 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.085975 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.089181 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-sys-config\"" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.089626 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-ca\"" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.089807 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\"" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.090006 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-global-ca\"" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212309 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68kdw\" (UniqueName: \"kubernetes.io/projected/76125ec9-7200-4d9a-8632-4f6a653c434c-kube-api-access-68kdw\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212369 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-pull\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212400 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212434 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212457 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212477 5120 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212496 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212517 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-push\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212635 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212897 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.212997 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.213037 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.314517 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.314614 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: 
I0122 12:10:29.314648 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.314680 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.314746 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-68kdw\" (UniqueName: \"kubernetes.io/projected/76125ec9-7200-4d9a-8632-4f6a653c434c-kube-api-access-68kdw\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.314932 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-pull\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315033 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315089 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315115 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315141 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315292 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315298 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315333 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-push\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315644 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315701 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315779 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.315733 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.316017 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.316054 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.316155 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.316356 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-run\") pod \"sg-bridge-2-build\" (UID: 
\"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.323752 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-push\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.328062 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-pull\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.333992 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-68kdw\" (UniqueName: \"kubernetes.io/projected/76125ec9-7200-4d9a-8632-4f6a653c434c-kube-api-access-68kdw\") pod \"sg-bridge-2-build\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.411862 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.582340 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1" path="/var/lib/kubelet/pods/c2cd56d3-48a3-4f60-9ebc-14e86f17e2a1/volumes" Jan 22 12:10:29 crc kubenswrapper[5120]: I0122 12:10:29.861386 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.612639 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z8lnh"] Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.624787 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.626900 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8lnh"] Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.731714 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"76125ec9-7200-4d9a-8632-4f6a653c434c","Type":"ContainerStarted","Data":"1463c9e3d32959ee2e8e1d727c895c558456624cebafde2c110e96ea8ba9f4fd"} Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.731781 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"76125ec9-7200-4d9a-8632-4f6a653c434c","Type":"ContainerStarted","Data":"a90092222f318a0a87bcff1fc50be1c6c98f3209f37eda836b41e5226bcff2b0"} Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.775491 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-utilities\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.775599 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7fhp\" (UniqueName: \"kubernetes.io/projected/6b8619b7-91c0-4e9a-a414-e678f914250c-kube-api-access-q7fhp\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.775806 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-catalog-content\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.876787 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-utilities\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.876846 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q7fhp\" (UniqueName: \"kubernetes.io/projected/6b8619b7-91c0-4e9a-a414-e678f914250c-kube-api-access-q7fhp\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.876891 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-catalog-content\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.877434 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-catalog-content\") pod \"redhat-operators-z8lnh\" 
(UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.877516 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-utilities\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.901946 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7fhp\" (UniqueName: \"kubernetes.io/projected/6b8619b7-91c0-4e9a-a414-e678f914250c-kube-api-access-q7fhp\") pod \"redhat-operators-z8lnh\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:30 crc kubenswrapper[5120]: I0122 12:10:30.976124 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:31 crc kubenswrapper[5120]: I0122 12:10:31.210434 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8lnh"] Jan 22 12:10:31 crc kubenswrapper[5120]: W0122 12:10:31.217719 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b8619b7_91c0_4e9a_a414_e678f914250c.slice/crio-4ce6e4d6f2d3d3291d1a8ba40c47213d89d0fd5a4214ae9bb9ffaace4e963797 WatchSource:0}: Error finding container 4ce6e4d6f2d3d3291d1a8ba40c47213d89d0fd5a4214ae9bb9ffaace4e963797: Status 404 returned error can't find the container with id 4ce6e4d6f2d3d3291d1a8ba40c47213d89d0fd5a4214ae9bb9ffaace4e963797 Jan 22 12:10:31 crc kubenswrapper[5120]: I0122 12:10:31.742639 5120 generic.go:358] "Generic (PLEG): container finished" podID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerID="1463c9e3d32959ee2e8e1d727c895c558456624cebafde2c110e96ea8ba9f4fd" exitCode=0 Jan 22 12:10:31 crc kubenswrapper[5120]: I0122 12:10:31.742744 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"76125ec9-7200-4d9a-8632-4f6a653c434c","Type":"ContainerDied","Data":"1463c9e3d32959ee2e8e1d727c895c558456624cebafde2c110e96ea8ba9f4fd"} Jan 22 12:10:31 crc kubenswrapper[5120]: I0122 12:10:31.745872 5120 generic.go:358] "Generic (PLEG): container finished" podID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerID="77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636" exitCode=0 Jan 22 12:10:31 crc kubenswrapper[5120]: I0122 12:10:31.746012 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8lnh" event={"ID":"6b8619b7-91c0-4e9a-a414-e678f914250c","Type":"ContainerDied","Data":"77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636"} Jan 22 12:10:31 crc kubenswrapper[5120]: I0122 12:10:31.746086 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8lnh" event={"ID":"6b8619b7-91c0-4e9a-a414-e678f914250c","Type":"ContainerStarted","Data":"4ce6e4d6f2d3d3291d1a8ba40c47213d89d0fd5a4214ae9bb9ffaace4e963797"} Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.409033 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kkhnz"] Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.413645 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.430215 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kkhnz"] Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.499868 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-catalog-content\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.499933 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6j2j\" (UniqueName: \"kubernetes.io/projected/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-kube-api-access-k6j2j\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.500012 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-utilities\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.602595 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-utilities\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.602760 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-catalog-content\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.602831 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k6j2j\" (UniqueName: \"kubernetes.io/projected/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-kube-api-access-k6j2j\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.604652 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-utilities\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.605068 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-catalog-content\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.716764 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k6j2j\" (UniqueName: \"kubernetes.io/projected/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-kube-api-access-k6j2j\") pod \"certified-operators-kkhnz\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.733066 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.771818 5120 generic.go:358] "Generic (PLEG): container finished" podID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerID="6e84abcfc92c46cd9c05ade077fb1a9e87b366a03f1ae7450820d1f8b8b9c951" exitCode=0 Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.772046 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"76125ec9-7200-4d9a-8632-4f6a653c434c","Type":"ContainerDied","Data":"6e84abcfc92c46cd9c05ade077fb1a9e87b366a03f1ae7450820d1f8b8b9c951"} Jan 22 12:10:32 crc kubenswrapper[5120]: I0122 12:10:32.819947 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_76125ec9-7200-4d9a-8632-4f6a653c434c/manage-dockerfile/0.log" Jan 22 12:10:33 crc kubenswrapper[5120]: I0122 12:10:33.013728 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kkhnz"] Jan 22 12:10:33 crc kubenswrapper[5120]: I0122 12:10:33.784374 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"76125ec9-7200-4d9a-8632-4f6a653c434c","Type":"ContainerStarted","Data":"a944fcb82f3c6723cb691dd08da97990bc675ba7df78e295bfd7678975a8901f"} Jan 22 12:10:33 crc kubenswrapper[5120]: I0122 12:10:33.787644 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8lnh" event={"ID":"6b8619b7-91c0-4e9a-a414-e678f914250c","Type":"ContainerStarted","Data":"32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204"} Jan 22 12:10:33 crc kubenswrapper[5120]: I0122 12:10:33.789716 5120 generic.go:358] "Generic (PLEG): container finished" podID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerID="40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab" exitCode=0 Jan 22 12:10:33 crc kubenswrapper[5120]: I0122 12:10:33.789805 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkhnz" event={"ID":"5dab9f1c-1f91-40c9-a40d-06e7e8573d49","Type":"ContainerDied","Data":"40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab"} Jan 22 12:10:33 crc kubenswrapper[5120]: I0122 12:10:33.789830 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkhnz" event={"ID":"5dab9f1c-1f91-40c9-a40d-06e7e8573d49","Type":"ContainerStarted","Data":"8ec38ad3a9388cddc24e4a5a9f2b784e7e27aed1fbe43c2d56585e8290fcc036"} Jan 22 12:10:33 crc kubenswrapper[5120]: I0122 12:10:33.819681 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=4.819647592 podStartE2EDuration="4.819647592s" podCreationTimestamp="2026-01-22 12:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:10:33.816102796 +0000 UTC m=+1368.560051137" watchObservedRunningTime="2026-01-22 12:10:33.819647592 +0000 UTC 
m=+1368.563595933" Jan 22 12:10:36 crc kubenswrapper[5120]: I0122 12:10:36.816022 5120 generic.go:358] "Generic (PLEG): container finished" podID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerID="32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204" exitCode=0 Jan 22 12:10:36 crc kubenswrapper[5120]: I0122 12:10:36.816187 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8lnh" event={"ID":"6b8619b7-91c0-4e9a-a414-e678f914250c","Type":"ContainerDied","Data":"32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204"} Jan 22 12:10:37 crc kubenswrapper[5120]: I0122 12:10:37.825641 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8lnh" event={"ID":"6b8619b7-91c0-4e9a-a414-e678f914250c","Type":"ContainerStarted","Data":"891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8"} Jan 22 12:10:37 crc kubenswrapper[5120]: I0122 12:10:37.827918 5120 generic.go:358] "Generic (PLEG): container finished" podID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerID="c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35" exitCode=0 Jan 22 12:10:37 crc kubenswrapper[5120]: I0122 12:10:37.827948 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkhnz" event={"ID":"5dab9f1c-1f91-40c9-a40d-06e7e8573d49","Type":"ContainerDied","Data":"c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35"} Jan 22 12:10:37 crc kubenswrapper[5120]: I0122 12:10:37.848093 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z8lnh" podStartSLOduration=7.00627784 podStartE2EDuration="7.848066124s" podCreationTimestamp="2026-01-22 12:10:30 +0000 UTC" firstStartedPulling="2026-01-22 12:10:31.74713191 +0000 UTC m=+1366.491080251" lastFinishedPulling="2026-01-22 12:10:32.588920194 +0000 UTC m=+1367.332868535" observedRunningTime="2026-01-22 12:10:37.845481842 +0000 UTC m=+1372.589430203" watchObservedRunningTime="2026-01-22 12:10:37.848066124 +0000 UTC m=+1372.592014465" Jan 22 12:10:38 crc kubenswrapper[5120]: I0122 12:10:38.840759 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkhnz" event={"ID":"5dab9f1c-1f91-40c9-a40d-06e7e8573d49","Type":"ContainerStarted","Data":"b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03"} Jan 22 12:10:38 crc kubenswrapper[5120]: I0122 12:10:38.861354 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kkhnz" podStartSLOduration=3.815780525 podStartE2EDuration="6.86133017s" podCreationTimestamp="2026-01-22 12:10:32 +0000 UTC" firstStartedPulling="2026-01-22 12:10:33.790933386 +0000 UTC m=+1368.534881727" lastFinishedPulling="2026-01-22 12:10:36.836483031 +0000 UTC m=+1371.580431372" observedRunningTime="2026-01-22 12:10:38.86012425 +0000 UTC m=+1373.604072601" watchObservedRunningTime="2026-01-22 12:10:38.86133017 +0000 UTC m=+1373.605278511" Jan 22 12:10:40 crc kubenswrapper[5120]: I0122 12:10:40.976412 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:40 crc kubenswrapper[5120]: I0122 12:10:40.976852 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:42 crc kubenswrapper[5120]: I0122 12:10:42.028220 5120 prober.go:120] "Probe 
failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z8lnh" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="registry-server" probeResult="failure" output=< Jan 22 12:10:42 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Jan 22 12:10:42 crc kubenswrapper[5120]: > Jan 22 12:10:42 crc kubenswrapper[5120]: I0122 12:10:42.733705 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:42 crc kubenswrapper[5120]: I0122 12:10:42.733796 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:42 crc kubenswrapper[5120]: I0122 12:10:42.781817 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:42 crc kubenswrapper[5120]: I0122 12:10:42.922459 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:43 crc kubenswrapper[5120]: I0122 12:10:43.024758 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kkhnz"] Jan 22 12:10:44 crc kubenswrapper[5120]: I0122 12:10:44.897853 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kkhnz" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="registry-server" containerID="cri-o://b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03" gracePeriod=2 Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.876446 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.914729 5120 generic.go:358] "Generic (PLEG): container finished" podID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerID="b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03" exitCode=0 Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.914858 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkhnz" event={"ID":"5dab9f1c-1f91-40c9-a40d-06e7e8573d49","Type":"ContainerDied","Data":"b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03"} Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.914906 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kkhnz" event={"ID":"5dab9f1c-1f91-40c9-a40d-06e7e8573d49","Type":"ContainerDied","Data":"8ec38ad3a9388cddc24e4a5a9f2b784e7e27aed1fbe43c2d56585e8290fcc036"} Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.914931 5120 scope.go:117] "RemoveContainer" containerID="b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03" Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.915221 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kkhnz" Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.937740 5120 scope.go:117] "RemoveContainer" containerID="c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35" Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.957376 5120 scope.go:117] "RemoveContainer" containerID="40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab" Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.977805 5120 scope.go:117] "RemoveContainer" containerID="b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03" Jan 22 12:10:45 crc kubenswrapper[5120]: E0122 12:10:45.983777 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03\": container with ID starting with b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03 not found: ID does not exist" containerID="b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03" Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.983861 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03"} err="failed to get container status \"b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03\": rpc error: code = NotFound desc = could not find container \"b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03\": container with ID starting with b2ba86375a4bb7a0aa9a92677b16cc8c1a5c3ff5702266557f68a9a4b302eb03 not found: ID does not exist" Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.983914 5120 scope.go:117] "RemoveContainer" containerID="c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35" Jan 22 12:10:45 crc kubenswrapper[5120]: E0122 12:10:45.984490 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35\": container with ID starting with c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35 not found: ID does not exist" containerID="c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35" Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.984552 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35"} err="failed to get container status \"c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35\": rpc error: code = NotFound desc = could not find container \"c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35\": container with ID starting with c50425756226163c21e659bfee62e0d70c6451a442ba076dc2bdce2146052e35 not found: ID does not exist" Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.984595 5120 scope.go:117] "RemoveContainer" containerID="40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab" Jan 22 12:10:45 crc kubenswrapper[5120]: E0122 12:10:45.984941 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab\": container with ID starting with 40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab not found: ID does not exist" containerID="40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab" 
Jan 22 12:10:45 crc kubenswrapper[5120]: I0122 12:10:45.985017 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab"} err="failed to get container status \"40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab\": rpc error: code = NotFound desc = could not find container \"40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab\": container with ID starting with 40a7ee616ac99fabf44657a14acf2735bea6cb79c20414651c409f5fe80651ab not found: ID does not exist" Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.020833 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-utilities\") pod \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.021127 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-catalog-content\") pod \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.021178 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6j2j\" (UniqueName: \"kubernetes.io/projected/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-kube-api-access-k6j2j\") pod \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\" (UID: \"5dab9f1c-1f91-40c9-a40d-06e7e8573d49\") " Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.023095 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-utilities" (OuterVolumeSpecName: "utilities") pod "5dab9f1c-1f91-40c9-a40d-06e7e8573d49" (UID: "5dab9f1c-1f91-40c9-a40d-06e7e8573d49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.038297 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-kube-api-access-k6j2j" (OuterVolumeSpecName: "kube-api-access-k6j2j") pod "5dab9f1c-1f91-40c9-a40d-06e7e8573d49" (UID: "5dab9f1c-1f91-40c9-a40d-06e7e8573d49"). InnerVolumeSpecName "kube-api-access-k6j2j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.067035 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dab9f1c-1f91-40c9-a40d-06e7e8573d49" (UID: "5dab9f1c-1f91-40c9-a40d-06e7e8573d49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.123558 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.124122 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.124141 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k6j2j\" (UniqueName: \"kubernetes.io/projected/5dab9f1c-1f91-40c9-a40d-06e7e8573d49-kube-api-access-k6j2j\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.261571 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kkhnz"] Jan 22 12:10:46 crc kubenswrapper[5120]: I0122 12:10:46.267571 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kkhnz"] Jan 22 12:10:47 crc kubenswrapper[5120]: I0122 12:10:47.582460 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" path="/var/lib/kubelet/pods/5dab9f1c-1f91-40c9-a40d-06e7e8573d49/volumes" Jan 22 12:10:51 crc kubenswrapper[5120]: I0122 12:10:51.022499 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:51 crc kubenswrapper[5120]: I0122 12:10:51.083085 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:51 crc kubenswrapper[5120]: I0122 12:10:51.262120 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z8lnh"] Jan 22 12:10:52 crc kubenswrapper[5120]: I0122 12:10:52.973416 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z8lnh" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="registry-server" containerID="cri-o://891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8" gracePeriod=2 Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.869158 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.940934 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7fhp\" (UniqueName: \"kubernetes.io/projected/6b8619b7-91c0-4e9a-a414-e678f914250c-kube-api-access-q7fhp\") pod \"6b8619b7-91c0-4e9a-a414-e678f914250c\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.941081 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-utilities\") pod \"6b8619b7-91c0-4e9a-a414-e678f914250c\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.941115 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-catalog-content\") pod \"6b8619b7-91c0-4e9a-a414-e678f914250c\" (UID: \"6b8619b7-91c0-4e9a-a414-e678f914250c\") " Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.942595 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-utilities" (OuterVolumeSpecName: "utilities") pod "6b8619b7-91c0-4e9a-a414-e678f914250c" (UID: "6b8619b7-91c0-4e9a-a414-e678f914250c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.949352 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8619b7-91c0-4e9a-a414-e678f914250c-kube-api-access-q7fhp" (OuterVolumeSpecName: "kube-api-access-q7fhp") pod "6b8619b7-91c0-4e9a-a414-e678f914250c" (UID: "6b8619b7-91c0-4e9a-a414-e678f914250c"). InnerVolumeSpecName "kube-api-access-q7fhp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.983130 5120 generic.go:358] "Generic (PLEG): container finished" podID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerID="891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8" exitCode=0 Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.983229 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8lnh" event={"ID":"6b8619b7-91c0-4e9a-a414-e678f914250c","Type":"ContainerDied","Data":"891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8"} Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.983268 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8lnh" Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.983728 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8lnh" event={"ID":"6b8619b7-91c0-4e9a-a414-e678f914250c","Type":"ContainerDied","Data":"4ce6e4d6f2d3d3291d1a8ba40c47213d89d0fd5a4214ae9bb9ffaace4e963797"} Jan 22 12:10:53 crc kubenswrapper[5120]: I0122 12:10:53.983766 5120 scope.go:117] "RemoveContainer" containerID="891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.014827 5120 scope.go:117] "RemoveContainer" containerID="32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.037739 5120 scope.go:117] "RemoveContainer" containerID="77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.043073 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.043103 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q7fhp\" (UniqueName: \"kubernetes.io/projected/6b8619b7-91c0-4e9a-a414-e678f914250c-kube-api-access-q7fhp\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.057338 5120 scope.go:117] "RemoveContainer" containerID="891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8" Jan 22 12:10:54 crc kubenswrapper[5120]: E0122 12:10:54.058174 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8\": container with ID starting with 891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8 not found: ID does not exist" containerID="891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.058218 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8"} err="failed to get container status \"891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8\": rpc error: code = NotFound desc = could not find container \"891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8\": container with ID starting with 891bb70a6859b729096a0949f1ada74f9a762a63f3bd5443cd4015da67b840e8 not found: ID does not exist" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.058247 5120 scope.go:117] "RemoveContainer" containerID="32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204" Jan 22 12:10:54 crc kubenswrapper[5120]: E0122 12:10:54.058719 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204\": container with ID starting with 32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204 not found: ID does not exist" containerID="32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.058803 5120 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204"} err="failed to get container status \"32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204\": rpc error: code = NotFound desc = could not find container \"32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204\": container with ID starting with 32f62d0c65734b64af94e4827830f897baf33245941f4b25bbe14b41edad7204 not found: ID does not exist" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.058903 5120 scope.go:117] "RemoveContainer" containerID="77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636" Jan 22 12:10:54 crc kubenswrapper[5120]: E0122 12:10:54.059305 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636\": container with ID starting with 77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636 not found: ID does not exist" containerID="77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.059343 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636"} err="failed to get container status \"77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636\": rpc error: code = NotFound desc = could not find container \"77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636\": container with ID starting with 77b4241a97ef24270910ab0a023bf14cbe1e30799af840842fa0cccae0d72636 not found: ID does not exist" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.110673 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b8619b7-91c0-4e9a-a414-e678f914250c" (UID: "6b8619b7-91c0-4e9a-a414-e678f914250c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.144294 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b8619b7-91c0-4e9a-a414-e678f914250c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.324488 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z8lnh"] Jan 22 12:10:54 crc kubenswrapper[5120]: I0122 12:10:54.332983 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z8lnh"] Jan 22 12:10:55 crc kubenswrapper[5120]: I0122 12:10:55.047699 5120 scope.go:117] "RemoveContainer" containerID="ebc82e27b7ff9936fb8ab3baff996147f2e548280fc1e0007bc5efe24e9891e6" Jan 22 12:10:55 crc kubenswrapper[5120]: I0122 12:10:55.581369 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" path="/var/lib/kubelet/pods/6b8619b7-91c0-4e9a-a414-e678f914250c/volumes" Jan 22 12:11:34 crc kubenswrapper[5120]: I0122 12:11:34.341021 5120 generic.go:358] "Generic (PLEG): container finished" podID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerID="a944fcb82f3c6723cb691dd08da97990bc675ba7df78e295bfd7678975a8901f" exitCode=0 Jan 22 12:11:34 crc kubenswrapper[5120]: I0122 12:11:34.341136 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"76125ec9-7200-4d9a-8632-4f6a653c434c","Type":"ContainerDied","Data":"a944fcb82f3c6723cb691dd08da97990bc675ba7df78e295bfd7678975a8901f"} Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.684656 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838054 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-buildworkdir\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838169 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-proxy-ca-bundles\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838204 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-ca-bundles\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838251 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-root\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838310 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-buildcachedir\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838392 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-build-blob-cache\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838415 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-run\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838442 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-system-configs\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838492 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-push\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838514 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: 
\"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-pull\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.838840 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.839110 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-node-pullsecrets\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.839208 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.839266 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68kdw\" (UniqueName: \"kubernetes.io/projected/76125ec9-7200-4d9a-8632-4f6a653c434c-kube-api-access-68kdw\") pod \"76125ec9-7200-4d9a-8632-4f6a653c434c\" (UID: \"76125ec9-7200-4d9a-8632-4f6a653c434c\") " Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.839735 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.840140 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.840162 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.840176 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/76125ec9-7200-4d9a-8632-4f6a653c434c-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.840280 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.840578 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.841129 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.842305 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.847875 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.848279 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.848522 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76125ec9-7200-4d9a-8632-4f6a653c434c-kube-api-access-68kdw" (OuterVolumeSpecName: "kube-api-access-68kdw") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "kube-api-access-68kdw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.941305 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.941347 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.941357 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.941367 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/76125ec9-7200-4d9a-8632-4f6a653c434c-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.941377 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-68kdw\" (UniqueName: \"kubernetes.io/projected/76125ec9-7200-4d9a-8632-4f6a653c434c-kube-api-access-68kdw\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.941388 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.941397 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76125ec9-7200-4d9a-8632-4f6a653c434c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:35 crc kubenswrapper[5120]: I0122 12:11:35.951944 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:11:36 crc kubenswrapper[5120]: I0122 12:11:36.043242 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:36 crc kubenswrapper[5120]: I0122 12:11:36.359925 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"76125ec9-7200-4d9a-8632-4f6a653c434c","Type":"ContainerDied","Data":"a90092222f318a0a87bcff1fc50be1c6c98f3209f37eda836b41e5226bcff2b0"} Jan 22 12:11:36 crc kubenswrapper[5120]: I0122 12:11:36.360001 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a90092222f318a0a87bcff1fc50be1c6c98f3209f37eda836b41e5226bcff2b0" Jan 22 12:11:36 crc kubenswrapper[5120]: I0122 12:11:36.360012 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 22 12:11:36 crc kubenswrapper[5120]: I0122 12:11:36.640019 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "76125ec9-7200-4d9a-8632-4f6a653c434c" (UID: "76125ec9-7200-4d9a-8632-4f6a653c434c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:11:36 crc kubenswrapper[5120]: I0122 12:11:36.652237 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/76125ec9-7200-4d9a-8632-4f6a653c434c-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.450228 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451156 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="extract-content" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451175 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="extract-content" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451190 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerName="git-clone" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451199 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerName="git-clone" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451215 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="extract-utilities" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451222 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="extract-utilities" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451231 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerName="manage-dockerfile" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451238 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerName="manage-dockerfile" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451250 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="extract-content" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451258 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="extract-content" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451267 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="registry-server" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451274 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="registry-server" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451289 5120 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="extract-utilities" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451297 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="extract-utilities" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451314 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="registry-server" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451321 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="registry-server" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451343 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerName="docker-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451349 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerName="docker-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451475 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="6b8619b7-91c0-4e9a-a414-e678f914250c" containerName="registry-server" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451486 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="5dab9f1c-1f91-40c9-a40d-06e7e8573d49" containerName="registry-server" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.451497 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="76125ec9-7200-4d9a-8632-4f6a653c434c" containerName="docker-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.469724 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.469890 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.472126 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-sys-config\"" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.472160 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-ca\"" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.472294 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-global-ca\"" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.481556 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\"" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.612879 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.612930 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.612971 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77m4r\" (UniqueName: \"kubernetes.io/projected/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-kube-api-access-77m4r\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.612988 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613020 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613072 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: 
I0122 12:11:40.613319 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613389 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613564 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613770 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613807 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.613848 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.715909 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.716437 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-77m4r\" (UniqueName: \"kubernetes.io/projected/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-kube-api-access-77m4r\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.716686 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" 
(UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.717279 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.717731 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.717843 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718050 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718158 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718231 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718435 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718473 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc 
kubenswrapper[5120]: I0122 12:11:40.718507 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718529 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718233 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718291 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718629 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718653 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.718836 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.719389 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.719585 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.720350 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.724373 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.724439 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.733286 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-77m4r\" (UniqueName: \"kubernetes.io/projected/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-kube-api-access-77m4r\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:40 crc kubenswrapper[5120]: I0122 12:11:40.788524 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:41 crc kubenswrapper[5120]: I0122 12:11:41.220545 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:11:41 crc kubenswrapper[5120]: I0122 12:11:41.404179 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc","Type":"ContainerStarted","Data":"ecb1085dcea5d3f742a090414f13d009680e89665e63e23de882bd7baa988a47"} Jan 22 12:11:42 crc kubenswrapper[5120]: I0122 12:11:42.412967 5120 generic.go:358] "Generic (PLEG): container finished" podID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerID="a5730f83c539302e9c7d05a91bf4d467541f23cec12c856f65dd4d2e326aaa3d" exitCode=0 Jan 22 12:11:42 crc kubenswrapper[5120]: I0122 12:11:42.414109 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc","Type":"ContainerDied","Data":"a5730f83c539302e9c7d05a91bf4d467541f23cec12c856f65dd4d2e326aaa3d"} Jan 22 12:11:43 crc kubenswrapper[5120]: I0122 12:11:43.422355 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc","Type":"ContainerStarted","Data":"ca94ec37f0f4f202255912d584c8b4e606005abacd20fa13f40d08f34546862e"} Jan 22 12:11:43 crc kubenswrapper[5120]: I0122 12:11:43.448363 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.448340345 podStartE2EDuration="3.448340345s" podCreationTimestamp="2026-01-22 12:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:11:43.447145976 +0000 UTC m=+1438.191094337" watchObservedRunningTime="2026-01-22 12:11:43.448340345 +0000 UTC m=+1438.192288686" Jan 22 12:11:51 crc kubenswrapper[5120]: I0122 12:11:51.155440 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:11:51 crc kubenswrapper[5120]: I0122 12:11:51.156803 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerName="docker-build" containerID="cri-o://ca94ec37f0f4f202255912d584c8b4e606005abacd20fa13f40d08f34546862e" gracePeriod=30 Jan 22 12:11:51 crc kubenswrapper[5120]: I0122 12:11:51.499763 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc/docker-build/0.log" Jan 22 12:11:51 crc kubenswrapper[5120]: I0122 12:11:51.500741 5120 generic.go:358] "Generic (PLEG): container finished" podID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerID="ca94ec37f0f4f202255912d584c8b4e606005abacd20fa13f40d08f34546862e" exitCode=1 Jan 22 12:11:51 crc kubenswrapper[5120]: I0122 12:11:51.500849 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc","Type":"ContainerDied","Data":"ca94ec37f0f4f202255912d584c8b4e606005abacd20fa13f40d08f34546862e"} Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.297752 5120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc/docker-build/0.log" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.298503 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349068 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-node-pullsecrets\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349222 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349250 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-blob-cache\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349432 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-root\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349472 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-ca-bundles\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349572 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-push\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349619 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildcachedir\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349658 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-pull\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349711 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildcachedir" 
(OuterVolumeSpecName: "buildcachedir") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349736 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-run\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349785 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77m4r\" (UniqueName: \"kubernetes.io/projected/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-kube-api-access-77m4r\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349865 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-proxy-ca-bundles\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349896 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildworkdir\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.349935 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-system-configs\") pod \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\" (UID: \"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc\") " Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.350478 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.350740 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.350776 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.350789 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.350995 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.359217 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.359501 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.364789 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.364838 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.365269 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.368122 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.372310 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-kube-api-access-77m4r" (OuterVolumeSpecName: "kube-api-access-77m4r") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "kube-api-access-77m4r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.407856 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" (UID: "ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452483 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452535 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452547 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452559 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452570 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-77m4r\" (UniqueName: \"kubernetes.io/projected/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-kube-api-access-77m4r\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452580 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452593 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452606 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.452618 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.512411 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc/docker-build/0.log" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.513131 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc","Type":"ContainerDied","Data":"ecb1085dcea5d3f742a090414f13d009680e89665e63e23de882bd7baa988a47"} Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.513208 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.513234 5120 scope.go:117] "RemoveContainer" containerID="ca94ec37f0f4f202255912d584c8b4e606005abacd20fa13f40d08f34546862e" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.551934 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.562774 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.566690 5120 scope.go:117] "RemoveContainer" containerID="a5730f83c539302e9c7d05a91bf4d467541f23cec12c856f65dd4d2e326aaa3d" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.849753 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.850633 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerName="manage-dockerfile" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.850657 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerName="manage-dockerfile" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.850699 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerName="docker-build" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.850706 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerName="docker-build" Jan 22 12:11:52 crc kubenswrapper[5120]: I0122 12:11:52.850837 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" containerName="docker-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.078375 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.078551 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.081486 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-hvzlm\"" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.081487 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-global-ca\"" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.081814 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-ca\"" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.082871 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-sys-config\"" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176176 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176238 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176272 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176287 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176309 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176327 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9j5m\" (UniqueName: \"kubernetes.io/projected/aec972f4-74cd-403c-a0a5-2e56146e5aa2-kube-api-access-q9j5m\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176344 
5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176363 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176409 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176438 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176454 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.176475 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.277565 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.277680 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.277709 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.277740 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.277756 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.277796 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.277833 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.278026 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.278186 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.278454 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.278454 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 
12:11:53.279133 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.279223 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.279502 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.279729 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.279996 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.280094 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q9j5m\" (UniqueName: \"kubernetes.io/projected/aec972f4-74cd-403c-a0a5-2e56146e5aa2-kube-api-access-q9j5m\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.280512 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.280611 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.280787 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.282183 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.297975 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.297981 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.300864 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9j5m\" (UniqueName: \"kubernetes.io/projected/aec972f4-74cd-403c-a0a5-2e56146e5aa2-kube-api-access-q9j5m\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.395766 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.583440 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc" path="/var/lib/kubelet/pods/ec3ec3c7-4cc7-41f1-a9e9-cc212facfcfc/volumes" Jan 22 12:11:53 crc kubenswrapper[5120]: I0122 12:11:53.832297 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 22 12:11:54 crc kubenswrapper[5120]: I0122 12:11:54.540412 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"aec972f4-74cd-403c-a0a5-2e56146e5aa2","Type":"ContainerStarted","Data":"05e7e3b8266433c149cdb4da43cac90ceffe24f2688ba6644117672b730ee9e5"} Jan 22 12:11:54 crc kubenswrapper[5120]: I0122 12:11:54.541142 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"aec972f4-74cd-403c-a0a5-2e56146e5aa2","Type":"ContainerStarted","Data":"3627cdac332d73eb137fa0d159e92cfafa1ce8488fa859f8ecc3dc50e6b5ea86"} Jan 22 12:11:54 crc kubenswrapper[5120]: E0122 12:11:54.746510 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaec972f4_74cd_403c_a0a5_2e56146e5aa2.slice/crio-05e7e3b8266433c149cdb4da43cac90ceffe24f2688ba6644117672b730ee9e5.scope\": RecentStats: unable to find data in memory cache]" Jan 22 12:11:55 crc kubenswrapper[5120]: I0122 12:11:55.577509 5120 generic.go:358] "Generic (PLEG): container finished" podID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerID="05e7e3b8266433c149cdb4da43cac90ceffe24f2688ba6644117672b730ee9e5" exitCode=0 Jan 22 12:11:55 crc kubenswrapper[5120]: I0122 12:11:55.595578 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"aec972f4-74cd-403c-a0a5-2e56146e5aa2","Type":"ContainerDied","Data":"05e7e3b8266433c149cdb4da43cac90ceffe24f2688ba6644117672b730ee9e5"} Jan 22 12:11:56 crc kubenswrapper[5120]: I0122 12:11:56.588034 5120 generic.go:358] "Generic (PLEG): container finished" podID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerID="83551d9292ded413ca21fc8aea98430e64cb1d14c3daa2a17a085a00e029936a" exitCode=0 Jan 22 12:11:56 crc kubenswrapper[5120]: I0122 12:11:56.588282 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"aec972f4-74cd-403c-a0a5-2e56146e5aa2","Type":"ContainerDied","Data":"83551d9292ded413ca21fc8aea98430e64cb1d14c3daa2a17a085a00e029936a"} Jan 22 12:11:56 crc kubenswrapper[5120]: I0122 12:11:56.640892 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_aec972f4-74cd-403c-a0a5-2e56146e5aa2/manage-dockerfile/0.log" Jan 22 12:11:57 crc kubenswrapper[5120]: I0122 12:11:57.602749 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"aec972f4-74cd-403c-a0a5-2e56146e5aa2","Type":"ContainerStarted","Data":"7e060d71e2f1980f392f5ce6385ea239d85b0d5f2ce92a364866fef48791e99c"} Jan 22 12:11:57 crc kubenswrapper[5120]: I0122 12:11:57.636614 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.636586576 podStartE2EDuration="5.636586576s" 
podCreationTimestamp="2026-01-22 12:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:11:57.632869185 +0000 UTC m=+1452.376817546" watchObservedRunningTime="2026-01-22 12:11:57.636586576 +0000 UTC m=+1452.380534937" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.139563 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484732-pmd7b"] Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.532902 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484732-pmd7b"] Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.533019 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484732-pmd7b" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.537028 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.538713 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.542885 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.594824 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l8k4\" (UniqueName: \"kubernetes.io/projected/2284d302-27de-4f84-9cd9-0b27dc76e987-kube-api-access-8l8k4\") pod \"auto-csr-approver-29484732-pmd7b\" (UID: \"2284d302-27de-4f84-9cd9-0b27dc76e987\") " pod="openshift-infra/auto-csr-approver-29484732-pmd7b" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.696838 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8l8k4\" (UniqueName: \"kubernetes.io/projected/2284d302-27de-4f84-9cd9-0b27dc76e987-kube-api-access-8l8k4\") pod \"auto-csr-approver-29484732-pmd7b\" (UID: \"2284d302-27de-4f84-9cd9-0b27dc76e987\") " pod="openshift-infra/auto-csr-approver-29484732-pmd7b" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.721259 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l8k4\" (UniqueName: \"kubernetes.io/projected/2284d302-27de-4f84-9cd9-0b27dc76e987-kube-api-access-8l8k4\") pod \"auto-csr-approver-29484732-pmd7b\" (UID: \"2284d302-27de-4f84-9cd9-0b27dc76e987\") " pod="openshift-infra/auto-csr-approver-29484732-pmd7b" Jan 22 12:12:00 crc kubenswrapper[5120]: I0122 12:12:00.856943 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484732-pmd7b" Jan 22 12:12:01 crc kubenswrapper[5120]: I0122 12:12:01.065534 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484732-pmd7b"] Jan 22 12:12:01 crc kubenswrapper[5120]: I0122 12:12:01.637740 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484732-pmd7b" event={"ID":"2284d302-27de-4f84-9cd9-0b27dc76e987","Type":"ContainerStarted","Data":"c8ef452b457358c81bf8f5146854a6437155b070b39cdb1c8d13771b0583a114"} Jan 22 12:12:10 crc kubenswrapper[5120]: I0122 12:12:10.738082 5120 generic.go:358] "Generic (PLEG): container finished" podID="2284d302-27de-4f84-9cd9-0b27dc76e987" containerID="afab18be716ae606d212e93ff4cb99381fd77d17295864dd09555b0262bbf573" exitCode=0 Jan 22 12:12:10 crc kubenswrapper[5120]: I0122 12:12:10.738240 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484732-pmd7b" event={"ID":"2284d302-27de-4f84-9cd9-0b27dc76e987","Type":"ContainerDied","Data":"afab18be716ae606d212e93ff4cb99381fd77d17295864dd09555b0262bbf573"} Jan 22 12:12:12 crc kubenswrapper[5120]: I0122 12:12:12.002139 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484732-pmd7b" Jan 22 12:12:12 crc kubenswrapper[5120]: I0122 12:12:12.103663 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l8k4\" (UniqueName: \"kubernetes.io/projected/2284d302-27de-4f84-9cd9-0b27dc76e987-kube-api-access-8l8k4\") pod \"2284d302-27de-4f84-9cd9-0b27dc76e987\" (UID: \"2284d302-27de-4f84-9cd9-0b27dc76e987\") " Jan 22 12:12:12 crc kubenswrapper[5120]: I0122 12:12:12.131129 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2284d302-27de-4f84-9cd9-0b27dc76e987-kube-api-access-8l8k4" (OuterVolumeSpecName: "kube-api-access-8l8k4") pod "2284d302-27de-4f84-9cd9-0b27dc76e987" (UID: "2284d302-27de-4f84-9cd9-0b27dc76e987"). InnerVolumeSpecName "kube-api-access-8l8k4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:12:12 crc kubenswrapper[5120]: I0122 12:12:12.205934 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8l8k4\" (UniqueName: \"kubernetes.io/projected/2284d302-27de-4f84-9cd9-0b27dc76e987-kube-api-access-8l8k4\") on node \"crc\" DevicePath \"\"" Jan 22 12:12:12 crc kubenswrapper[5120]: I0122 12:12:12.754804 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484732-pmd7b" Jan 22 12:12:12 crc kubenswrapper[5120]: I0122 12:12:12.754838 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484732-pmd7b" event={"ID":"2284d302-27de-4f84-9cd9-0b27dc76e987","Type":"ContainerDied","Data":"c8ef452b457358c81bf8f5146854a6437155b070b39cdb1c8d13771b0583a114"} Jan 22 12:12:12 crc kubenswrapper[5120]: I0122 12:12:12.754871 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8ef452b457358c81bf8f5146854a6437155b070b39cdb1c8d13771b0583a114" Jan 22 12:12:13 crc kubenswrapper[5120]: I0122 12:12:13.072985 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484726-c8lz2"] Jan 22 12:12:13 crc kubenswrapper[5120]: I0122 12:12:13.080242 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484726-c8lz2"] Jan 22 12:12:13 crc kubenswrapper[5120]: I0122 12:12:13.580039 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3858bc47-7853-4b6a-b130-aea8f1f3e8c7" path="/var/lib/kubelet/pods/3858bc47-7853-4b6a-b130-aea8f1f3e8c7/volumes" Jan 22 12:12:31 crc kubenswrapper[5120]: I0122 12:12:31.973057 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:12:31 crc kubenswrapper[5120]: I0122 12:12:31.974789 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.284502 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zqrjj"] Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.285568 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2284d302-27de-4f84-9cd9-0b27dc76e987" containerName="oc" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.285584 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2284d302-27de-4f84-9cd9-0b27dc76e987" containerName="oc" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.285705 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="2284d302-27de-4f84-9cd9-0b27dc76e987" containerName="oc" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.289597 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.305753 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zqrjj"] Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.333117 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-utilities\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.333172 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-catalog-content\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.333232 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtg7z\" (UniqueName: \"kubernetes.io/projected/28308e30-8c83-4b30-93e3-1aff509cf1dc-kube-api-access-dtg7z\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.434312 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-utilities\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.434361 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-catalog-content\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.434408 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dtg7z\" (UniqueName: \"kubernetes.io/projected/28308e30-8c83-4b30-93e3-1aff509cf1dc-kube-api-access-dtg7z\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.434982 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-utilities\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.435080 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-catalog-content\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.456248 5120 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dtg7z\" (UniqueName: \"kubernetes.io/projected/28308e30-8c83-4b30-93e3-1aff509cf1dc-kube-api-access-dtg7z\") pod \"community-operators-zqrjj\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.606113 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.901475 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zqrjj"] Jan 22 12:12:33 crc kubenswrapper[5120]: I0122 12:12:33.937895 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrjj" event={"ID":"28308e30-8c83-4b30-93e3-1aff509cf1dc","Type":"ContainerStarted","Data":"78da9bd235cb1cbf9c32214297c3faf8f9a8e55366f2f87f0a281db0b912c76d"} Jan 22 12:12:34 crc kubenswrapper[5120]: I0122 12:12:34.946389 5120 generic.go:358] "Generic (PLEG): container finished" podID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerID="15919de601a02f8d7223de39029e9f611fffc4c72aadf9633b4a376bf9bd33e5" exitCode=0 Jan 22 12:12:34 crc kubenswrapper[5120]: I0122 12:12:34.946778 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrjj" event={"ID":"28308e30-8c83-4b30-93e3-1aff509cf1dc","Type":"ContainerDied","Data":"15919de601a02f8d7223de39029e9f611fffc4c72aadf9633b4a376bf9bd33e5"} Jan 22 12:12:35 crc kubenswrapper[5120]: I0122 12:12:35.955260 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrjj" event={"ID":"28308e30-8c83-4b30-93e3-1aff509cf1dc","Type":"ContainerStarted","Data":"47c71f822e85394e50e7509e7e9e00925405fc12ea2e622022d8d286450cedab"} Jan 22 12:12:36 crc kubenswrapper[5120]: I0122 12:12:36.964288 5120 generic.go:358] "Generic (PLEG): container finished" podID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerID="47c71f822e85394e50e7509e7e9e00925405fc12ea2e622022d8d286450cedab" exitCode=0 Jan 22 12:12:36 crc kubenswrapper[5120]: I0122 12:12:36.964407 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrjj" event={"ID":"28308e30-8c83-4b30-93e3-1aff509cf1dc","Type":"ContainerDied","Data":"47c71f822e85394e50e7509e7e9e00925405fc12ea2e622022d8d286450cedab"} Jan 22 12:12:37 crc kubenswrapper[5120]: I0122 12:12:37.975631 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrjj" event={"ID":"28308e30-8c83-4b30-93e3-1aff509cf1dc","Type":"ContainerStarted","Data":"4cd7465cca5cd8c5318043e47ace75ae43787daaa2e75da3ea2f58c07fca3b5b"} Jan 22 12:12:38 crc kubenswrapper[5120]: I0122 12:12:38.007012 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zqrjj" podStartSLOduration=4.307357092 podStartE2EDuration="5.006937045s" podCreationTimestamp="2026-01-22 12:12:33 +0000 UTC" firstStartedPulling="2026-01-22 12:12:34.947775335 +0000 UTC m=+1489.691723686" lastFinishedPulling="2026-01-22 12:12:35.647355298 +0000 UTC m=+1490.391303639" observedRunningTime="2026-01-22 12:12:38.004325522 +0000 UTC m=+1492.748273863" watchObservedRunningTime="2026-01-22 12:12:38.006937045 +0000 UTC m=+1492.750885426" Jan 22 12:12:43 crc kubenswrapper[5120]: I0122 12:12:43.606894 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:43 crc kubenswrapper[5120]: I0122 12:12:43.607579 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:43 crc kubenswrapper[5120]: I0122 12:12:43.679818 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:44 crc kubenswrapper[5120]: I0122 12:12:44.083522 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:44 crc kubenswrapper[5120]: I0122 12:12:44.131587 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zqrjj"] Jan 22 12:12:46 crc kubenswrapper[5120]: I0122 12:12:46.052899 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zqrjj" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="registry-server" containerID="cri-o://4cd7465cca5cd8c5318043e47ace75ae43787daaa2e75da3ea2f58c07fca3b5b" gracePeriod=2 Jan 22 12:12:47 crc kubenswrapper[5120]: I0122 12:12:47.696190 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:12:47 crc kubenswrapper[5120]: I0122 12:12:47.696401 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:12:47 crc kubenswrapper[5120]: I0122 12:12:47.781496 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:12:47 crc kubenswrapper[5120]: I0122 12:12:47.781496 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.073563 5120 generic.go:358] "Generic (PLEG): container finished" podID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerID="4cd7465cca5cd8c5318043e47ace75ae43787daaa2e75da3ea2f58c07fca3b5b" exitCode=0 Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.073674 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrjj" event={"ID":"28308e30-8c83-4b30-93e3-1aff509cf1dc","Type":"ContainerDied","Data":"4cd7465cca5cd8c5318043e47ace75ae43787daaa2e75da3ea2f58c07fca3b5b"} Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.428661 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.479169 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtg7z\" (UniqueName: \"kubernetes.io/projected/28308e30-8c83-4b30-93e3-1aff509cf1dc-kube-api-access-dtg7z\") pod \"28308e30-8c83-4b30-93e3-1aff509cf1dc\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.479233 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-utilities\") pod \"28308e30-8c83-4b30-93e3-1aff509cf1dc\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.479311 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-catalog-content\") pod \"28308e30-8c83-4b30-93e3-1aff509cf1dc\" (UID: \"28308e30-8c83-4b30-93e3-1aff509cf1dc\") " Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.481473 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-utilities" (OuterVolumeSpecName: "utilities") pod "28308e30-8c83-4b30-93e3-1aff509cf1dc" (UID: "28308e30-8c83-4b30-93e3-1aff509cf1dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.496257 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28308e30-8c83-4b30-93e3-1aff509cf1dc-kube-api-access-dtg7z" (OuterVolumeSpecName: "kube-api-access-dtg7z") pod "28308e30-8c83-4b30-93e3-1aff509cf1dc" (UID: "28308e30-8c83-4b30-93e3-1aff509cf1dc"). InnerVolumeSpecName "kube-api-access-dtg7z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.541757 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28308e30-8c83-4b30-93e3-1aff509cf1dc" (UID: "28308e30-8c83-4b30-93e3-1aff509cf1dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.581294 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dtg7z\" (UniqueName: \"kubernetes.io/projected/28308e30-8c83-4b30-93e3-1aff509cf1dc-kube-api-access-dtg7z\") on node \"crc\" DevicePath \"\"" Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.581342 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:12:48 crc kubenswrapper[5120]: I0122 12:12:48.581353 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28308e30-8c83-4b30-93e3-1aff509cf1dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.117429 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zqrjj" Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.116935 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zqrjj" event={"ID":"28308e30-8c83-4b30-93e3-1aff509cf1dc","Type":"ContainerDied","Data":"78da9bd235cb1cbf9c32214297c3faf8f9a8e55366f2f87f0a281db0b912c76d"} Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.118813 5120 scope.go:117] "RemoveContainer" containerID="4cd7465cca5cd8c5318043e47ace75ae43787daaa2e75da3ea2f58c07fca3b5b" Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.157612 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zqrjj"] Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.165019 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zqrjj"] Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.385722 5120 scope.go:117] "RemoveContainer" containerID="47c71f822e85394e50e7509e7e9e00925405fc12ea2e622022d8d286450cedab" Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.406275 5120 scope.go:117] "RemoveContainer" containerID="15919de601a02f8d7223de39029e9f611fffc4c72aadf9633b4a376bf9bd33e5" Jan 22 12:12:49 crc kubenswrapper[5120]: I0122 12:12:49.581085 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" path="/var/lib/kubelet/pods/28308e30-8c83-4b30-93e3-1aff509cf1dc/volumes" Jan 22 12:12:55 crc kubenswrapper[5120]: I0122 12:12:55.241680 5120 scope.go:117] "RemoveContainer" containerID="23dd071c493eb18691c5ccc422d25241938024f9dc9c51c1c687fd54070a5cca" Jan 22 12:13:01 crc kubenswrapper[5120]: I0122 12:13:01.972746 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:13:01 crc kubenswrapper[5120]: I0122 12:13:01.973764 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:13:05 crc kubenswrapper[5120]: I0122 12:13:05.278851 5120 generic.go:358] "Generic (PLEG): container finished" podID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerID="7e060d71e2f1980f392f5ce6385ea239d85b0d5f2ce92a364866fef48791e99c" exitCode=0 Jan 22 12:13:05 crc kubenswrapper[5120]: I0122 12:13:05.279045 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"aec972f4-74cd-403c-a0a5-2e56146e5aa2","Type":"ContainerDied","Data":"7e060d71e2f1980f392f5ce6385ea239d85b0d5f2ce92a364866fef48791e99c"} Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.575917 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602147 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildworkdir\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602244 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-ca-bundles\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602281 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-run\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602323 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-pull\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602343 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-blob-cache\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602370 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-system-configs\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602392 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-node-pullsecrets\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602438 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-proxy-ca-bundles\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602483 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-push\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602600 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-root\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602664 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9j5m\" (UniqueName: \"kubernetes.io/projected/aec972f4-74cd-403c-a0a5-2e56146e5aa2-kube-api-access-q9j5m\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.602683 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildcachedir\") pod \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\" (UID: \"aec972f4-74cd-403c-a0a5-2e56146e5aa2\") " Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.603174 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.604985 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.605016 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.605102 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.606545 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.606906 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.608922 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.613407 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-push" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-push") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "builder-dockercfg-hvzlm-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.613937 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec972f4-74cd-403c-a0a5-2e56146e5aa2-kube-api-access-q9j5m" (OuterVolumeSpecName: "kube-api-access-q9j5m") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "kube-api-access-q9j5m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.616744 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-pull" (OuterVolumeSpecName: "builder-dockercfg-hvzlm-pull") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "builder-dockercfg-hvzlm-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704743 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q9j5m\" (UniqueName: \"kubernetes.io/projected/aec972f4-74cd-403c-a0a5-2e56146e5aa2-kube-api-access-q9j5m\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704807 5120 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704834 5120 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704861 5120 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704884 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704907 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-pull\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-pull\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704932 5120 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.704983 5120 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/aec972f4-74cd-403c-a0a5-2e56146e5aa2-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.705008 5120 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.705032 5120 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-hvzlm-push\" (UniqueName: \"kubernetes.io/secret/aec972f4-74cd-403c-a0a5-2e56146e5aa2-builder-dockercfg-hvzlm-push\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.745165 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:13:06 crc kubenswrapper[5120]: I0122 12:13:06.806671 5120 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:07 crc kubenswrapper[5120]: I0122 12:13:07.300196 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"aec972f4-74cd-403c-a0a5-2e56146e5aa2","Type":"ContainerDied","Data":"3627cdac332d73eb137fa0d159e92cfafa1ce8488fa859f8ecc3dc50e6b5ea86"} Jan 22 12:13:07 crc kubenswrapper[5120]: I0122 12:13:07.300247 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3627cdac332d73eb137fa0d159e92cfafa1ce8488fa859f8ecc3dc50e6b5ea86" Jan 22 12:13:07 crc kubenswrapper[5120]: I0122 12:13:07.300332 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 22 12:13:07 crc kubenswrapper[5120]: I0122 12:13:07.642311 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "aec972f4-74cd-403c-a0a5-2e56146e5aa2" (UID: "aec972f4-74cd-403c-a0a5-2e56146e5aa2"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:13:07 crc kubenswrapper[5120]: I0122 12:13:07.723347 5120 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/aec972f4-74cd-403c-a0a5-2e56146e5aa2-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.537066 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-84c66d88-wp5jc"] Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538212 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerName="docker-build" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538233 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerName="docker-build" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538248 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="extract-utilities" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538256 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="extract-utilities" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538289 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerName="manage-dockerfile" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538301 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerName="manage-dockerfile" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538315 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerName="git-clone" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538323 5120 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerName="git-clone" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538333 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="registry-server" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538340 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="registry-server" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538357 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="extract-content" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538364 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="extract-content" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538506 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="aec972f4-74cd-403c-a0a5-2e56146e5aa2" containerName="docker-build" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.538526 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="28308e30-8c83-4b30-93e3-1aff509cf1dc" containerName="registry-server" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.547807 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.551556 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-8gw2f\"" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.553011 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-84c66d88-wp5jc"] Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.600818 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8f9d3100-17a5-4c92-bf93-17c74efea49f-runner\") pod \"smart-gateway-operator-84c66d88-wp5jc\" (UID: \"8f9d3100-17a5-4c92-bf93-17c74efea49f\") " pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.600893 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx4wv\" (UniqueName: \"kubernetes.io/projected/8f9d3100-17a5-4c92-bf93-17c74efea49f-kube-api-access-lx4wv\") pod \"smart-gateway-operator-84c66d88-wp5jc\" (UID: \"8f9d3100-17a5-4c92-bf93-17c74efea49f\") " pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.702369 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lx4wv\" (UniqueName: \"kubernetes.io/projected/8f9d3100-17a5-4c92-bf93-17c74efea49f-kube-api-access-lx4wv\") pod \"smart-gateway-operator-84c66d88-wp5jc\" (UID: \"8f9d3100-17a5-4c92-bf93-17c74efea49f\") " pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.703517 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8f9d3100-17a5-4c92-bf93-17c74efea49f-runner\") pod \"smart-gateway-operator-84c66d88-wp5jc\" (UID: \"8f9d3100-17a5-4c92-bf93-17c74efea49f\") " 
pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.704814 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8f9d3100-17a5-4c92-bf93-17c74efea49f-runner\") pod \"smart-gateway-operator-84c66d88-wp5jc\" (UID: \"8f9d3100-17a5-4c92-bf93-17c74efea49f\") " pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.727449 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx4wv\" (UniqueName: \"kubernetes.io/projected/8f9d3100-17a5-4c92-bf93-17c74efea49f-kube-api-access-lx4wv\") pod \"smart-gateway-operator-84c66d88-wp5jc\" (UID: \"8f9d3100-17a5-4c92-bf93-17c74efea49f\") " pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:12 crc kubenswrapper[5120]: I0122 12:13:12.881819 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" Jan 22 12:13:13 crc kubenswrapper[5120]: I0122 12:13:13.145815 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-84c66d88-wp5jc"] Jan 22 12:13:13 crc kubenswrapper[5120]: W0122 12:13:13.151177 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9d3100_17a5_4c92_bf93_17c74efea49f.slice/crio-c1ad324f0f10379d7e4bf1f0b32fbd2b35710a26419db0a7d2984f3d32503f9c WatchSource:0}: Error finding container c1ad324f0f10379d7e4bf1f0b32fbd2b35710a26419db0a7d2984f3d32503f9c: Status 404 returned error can't find the container with id c1ad324f0f10379d7e4bf1f0b32fbd2b35710a26419db0a7d2984f3d32503f9c Jan 22 12:13:13 crc kubenswrapper[5120]: I0122 12:13:13.348827 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" event={"ID":"8f9d3100-17a5-4c92-bf93-17c74efea49f","Type":"ContainerStarted","Data":"c1ad324f0f10379d7e4bf1f0b32fbd2b35710a26419db0a7d2984f3d32503f9c"} Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.127025 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-69f575f8bc-9msdn"] Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.720765 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-69f575f8bc-9msdn"] Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.720927 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.723053 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-tzsgp\"" Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.876126 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dxrk\" (UniqueName: \"kubernetes.io/projected/71c6d75c-6634-4017-92b9-487a57bcc47b-kube-api-access-6dxrk\") pod \"service-telemetry-operator-69f575f8bc-9msdn\" (UID: \"71c6d75c-6634-4017-92b9-487a57bcc47b\") " pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.876201 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/71c6d75c-6634-4017-92b9-487a57bcc47b-runner\") pod \"service-telemetry-operator-69f575f8bc-9msdn\" (UID: \"71c6d75c-6634-4017-92b9-487a57bcc47b\") " pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.978043 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6dxrk\" (UniqueName: \"kubernetes.io/projected/71c6d75c-6634-4017-92b9-487a57bcc47b-kube-api-access-6dxrk\") pod \"service-telemetry-operator-69f575f8bc-9msdn\" (UID: \"71c6d75c-6634-4017-92b9-487a57bcc47b\") " pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.978241 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/71c6d75c-6634-4017-92b9-487a57bcc47b-runner\") pod \"service-telemetry-operator-69f575f8bc-9msdn\" (UID: \"71c6d75c-6634-4017-92b9-487a57bcc47b\") " pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:16 crc kubenswrapper[5120]: I0122 12:13:16.978980 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/71c6d75c-6634-4017-92b9-487a57bcc47b-runner\") pod \"service-telemetry-operator-69f575f8bc-9msdn\" (UID: \"71c6d75c-6634-4017-92b9-487a57bcc47b\") " pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:17 crc kubenswrapper[5120]: I0122 12:13:17.009599 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dxrk\" (UniqueName: \"kubernetes.io/projected/71c6d75c-6634-4017-92b9-487a57bcc47b-kube-api-access-6dxrk\") pod \"service-telemetry-operator-69f575f8bc-9msdn\" (UID: \"71c6d75c-6634-4017-92b9-487a57bcc47b\") " pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:17 crc kubenswrapper[5120]: I0122 12:13:17.040184 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" Jan 22 12:13:27 crc kubenswrapper[5120]: I0122 12:13:27.120180 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-69f575f8bc-9msdn"] Jan 22 12:13:28 crc kubenswrapper[5120]: I0122 12:13:28.494199 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" event={"ID":"71c6d75c-6634-4017-92b9-487a57bcc47b","Type":"ContainerStarted","Data":"a05c553685a347cc1108be355ff912afacd0408d86bf990855d241612c189e06"} Jan 22 12:13:31 crc kubenswrapper[5120]: I0122 12:13:31.972566 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:13:31 crc kubenswrapper[5120]: I0122 12:13:31.972662 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:13:31 crc kubenswrapper[5120]: I0122 12:13:31.972732 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:13:31 crc kubenswrapper[5120]: I0122 12:13:31.973571 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"719354116d7ea0573a90aa1ae4bf7fd19ddeee3f2ea6145219b58e58618f132f"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:13:31 crc kubenswrapper[5120]: I0122 12:13:31.973633 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://719354116d7ea0573a90aa1ae4bf7fd19ddeee3f2ea6145219b58e58618f132f" gracePeriod=600 Jan 22 12:13:33 crc kubenswrapper[5120]: I0122 12:13:33.534004 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="719354116d7ea0573a90aa1ae4bf7fd19ddeee3f2ea6145219b58e58618f132f" exitCode=0 Jan 22 12:13:33 crc kubenswrapper[5120]: I0122 12:13:33.534097 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"719354116d7ea0573a90aa1ae4bf7fd19ddeee3f2ea6145219b58e58618f132f"} Jan 22 12:13:33 crc kubenswrapper[5120]: I0122 12:13:33.534167 5120 scope.go:117] "RemoveContainer" containerID="0ce45fe111abe3fb25265c0d4114782f8899115da5ec0e060bbf1264c0bf05d4" Jan 22 12:13:34 crc kubenswrapper[5120]: I0122 12:13:34.545640 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" event={"ID":"8f9d3100-17a5-4c92-bf93-17c74efea49f","Type":"ContainerStarted","Data":"15bc519c44c271587bd2ef9f8859c7f75171cb70dd45fe5bd26e4304eb0c6206"} Jan 22 12:13:34 crc kubenswrapper[5120]: I0122 
Jan 22 12:13:34 crc kubenswrapper[5120]: I0122 12:13:34.550855 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"} Jan 22 12:13:34 crc kubenswrapper[5120]: I0122 12:13:34.569088 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-84c66d88-wp5jc" podStartSLOduration=1.762455828 podStartE2EDuration="22.569063215s" podCreationTimestamp="2026-01-22 12:13:12 +0000 UTC" firstStartedPulling="2026-01-22 12:13:13.152858876 +0000 UTC m=+1527.896807207" lastFinishedPulling="2026-01-22 12:13:33.959466243 +0000 UTC m=+1548.703414594" observedRunningTime="2026-01-22 12:13:34.565088099 +0000 UTC m=+1549.309036440" watchObservedRunningTime="2026-01-22 12:13:34.569063215 +0000 UTC m=+1549.313011556" Jan 22 12:13:40 crc kubenswrapper[5120]: I0122 12:13:40.616389 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" event={"ID":"71c6d75c-6634-4017-92b9-487a57bcc47b","Type":"ContainerStarted","Data":"3419440dd3ec67879ab184544f0d29d3207e2973d23ba74b4c204745af173815"} Jan 22 12:13:40 crc kubenswrapper[5120]: I0122 12:13:40.640328 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-69f575f8bc-9msdn" podStartSLOduration=11.906733284 podStartE2EDuration="24.640310764s" podCreationTimestamp="2026-01-22 12:13:16 +0000 UTC" firstStartedPulling="2026-01-22 12:13:27.47081654 +0000 UTC m=+1542.214764881" lastFinishedPulling="2026-01-22 12:13:40.204394 +0000 UTC m=+1554.948342361" observedRunningTime="2026-01-22 12:13:40.636059851 +0000 UTC m=+1555.380008212" watchObservedRunningTime="2026-01-22 12:13:40.640310764 +0000 UTC m=+1555.384259105" Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.156827 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484734-7jmnm"] Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.755890 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484734-7jmnm"] Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.756201 5120 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484734-7jmnm" Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.760398 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.762232 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.762262 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.878432 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qkj7\" (UniqueName: \"kubernetes.io/projected/2c1b3bc9-3782-474e-a90c-86f0ba86fa6a-kube-api-access-7qkj7\") pod \"auto-csr-approver-29484734-7jmnm\" (UID: \"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a\") " pod="openshift-infra/auto-csr-approver-29484734-7jmnm" Jan 22 12:14:00 crc kubenswrapper[5120]: I0122 12:14:00.980969 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7qkj7\" (UniqueName: \"kubernetes.io/projected/2c1b3bc9-3782-474e-a90c-86f0ba86fa6a-kube-api-access-7qkj7\") pod \"auto-csr-approver-29484734-7jmnm\" (UID: \"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a\") " pod="openshift-infra/auto-csr-approver-29484734-7jmnm" Jan 22 12:14:01 crc kubenswrapper[5120]: I0122 12:14:01.003806 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qkj7\" (UniqueName: \"kubernetes.io/projected/2c1b3bc9-3782-474e-a90c-86f0ba86fa6a-kube-api-access-7qkj7\") pod \"auto-csr-approver-29484734-7jmnm\" (UID: \"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a\") " pod="openshift-infra/auto-csr-approver-29484734-7jmnm" Jan 22 12:14:01 crc kubenswrapper[5120]: I0122 12:14:01.095783 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484734-7jmnm" Jan 22 12:14:01 crc kubenswrapper[5120]: I0122 12:14:01.321522 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484734-7jmnm"] Jan 22 12:14:01 crc kubenswrapper[5120]: I0122 12:14:01.785703 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484734-7jmnm" event={"ID":"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a","Type":"ContainerStarted","Data":"457680467a87c168acb336fde84c6785d065ccc55d5d03b07ac77578c2019e6f"} Jan 22 12:14:02 crc kubenswrapper[5120]: I0122 12:14:02.795896 5120 generic.go:358] "Generic (PLEG): container finished" podID="2c1b3bc9-3782-474e-a90c-86f0ba86fa6a" containerID="21b98295bffce8d00861339ce4655dd1e74538d2d7b8c008a2e3013d23d808e0" exitCode=0 Jan 22 12:14:02 crc kubenswrapper[5120]: I0122 12:14:02.796014 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484734-7jmnm" event={"ID":"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a","Type":"ContainerDied","Data":"21b98295bffce8d00861339ce4655dd1e74538d2d7b8c008a2e3013d23d808e0"} Jan 22 12:14:04 crc kubenswrapper[5120]: I0122 12:14:04.048675 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484734-7jmnm" Jan 22 12:14:04 crc kubenswrapper[5120]: I0122 12:14:04.125604 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qkj7\" (UniqueName: \"kubernetes.io/projected/2c1b3bc9-3782-474e-a90c-86f0ba86fa6a-kube-api-access-7qkj7\") pod \"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a\" (UID: \"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a\") " Jan 22 12:14:04 crc kubenswrapper[5120]: I0122 12:14:04.137052 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c1b3bc9-3782-474e-a90c-86f0ba86fa6a-kube-api-access-7qkj7" (OuterVolumeSpecName: "kube-api-access-7qkj7") pod "2c1b3bc9-3782-474e-a90c-86f0ba86fa6a" (UID: "2c1b3bc9-3782-474e-a90c-86f0ba86fa6a"). InnerVolumeSpecName "kube-api-access-7qkj7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:14:04 crc kubenswrapper[5120]: I0122 12:14:04.226980 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7qkj7\" (UniqueName: \"kubernetes.io/projected/2c1b3bc9-3782-474e-a90c-86f0ba86fa6a-kube-api-access-7qkj7\") on node \"crc\" DevicePath \"\"" Jan 22 12:14:04 crc kubenswrapper[5120]: I0122 12:14:04.814545 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484734-7jmnm" event={"ID":"2c1b3bc9-3782-474e-a90c-86f0ba86fa6a","Type":"ContainerDied","Data":"457680467a87c168acb336fde84c6785d065ccc55d5d03b07ac77578c2019e6f"} Jan 22 12:14:04 crc kubenswrapper[5120]: I0122 12:14:04.814593 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="457680467a87c168acb336fde84c6785d065ccc55d5d03b07ac77578c2019e6f" Jan 22 12:14:04 crc kubenswrapper[5120]: I0122 12:14:04.814684 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484734-7jmnm" Jan 22 12:14:05 crc kubenswrapper[5120]: I0122 12:14:05.113809 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484728-j8w4j"] Jan 22 12:14:05 crc kubenswrapper[5120]: I0122 12:14:05.119756 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484728-j8w4j"] Jan 22 12:14:05 crc kubenswrapper[5120]: I0122 12:14:05.583243 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba296aaf-56d0-49e4-b647-aae80f6fbd52" path="/var/lib/kubelet/pods/ba296aaf-56d0-49e4-b647-aae80f6fbd52/volumes" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.581459 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zgrdr"] Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.582932 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c1b3bc9-3782-474e-a90c-86f0ba86fa6a" containerName="oc" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.582978 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c1b3bc9-3782-474e-a90c-86f0ba86fa6a" containerName="oc" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.583149 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="2c1b3bc9-3782-474e-a90c-86f0ba86fa6a" containerName="oc" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.598780 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zgrdr"] Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.598939 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.601755 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.602413 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.602457 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.602529 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-2nlrp\"" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.602648 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.603057 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.603164 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.696540 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: 
\"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.696614 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.696649 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.696669 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-config\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.696761 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-users\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.696832 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.696873 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmtv5\" (UniqueName: \"kubernetes.io/projected/f4812e83-6f17-4bad-8aaa-1521eb0b590f-kube-api-access-jmtv5\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.798783 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.798851 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.798879 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.798906 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-config\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.799039 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-users\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.799093 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.799130 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jmtv5\" (UniqueName: \"kubernetes.io/projected/f4812e83-6f17-4bad-8aaa-1521eb0b590f-kube-api-access-jmtv5\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.800546 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-config\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.809156 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.809206 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" 
(UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.810267 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.811921 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-users\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.817499 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.820248 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmtv5\" (UniqueName: \"kubernetes.io/projected/f4812e83-6f17-4bad-8aaa-1521eb0b590f-kube-api-access-jmtv5\") pod \"default-interconnect-55bf8d5cb-zgrdr\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:13 crc kubenswrapper[5120]: I0122 12:14:13.922461 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:14:14 crc kubenswrapper[5120]: I0122 12:14:14.350486 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zgrdr"] Jan 22 12:14:14 crc kubenswrapper[5120]: I0122 12:14:14.893975 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" event={"ID":"f4812e83-6f17-4bad-8aaa-1521eb0b590f","Type":"ContainerStarted","Data":"11628814cb12f23bc6c37dd57728341ba4c21021b5a9ed812a9f0c32aac8439a"} Jan 22 12:14:19 crc kubenswrapper[5120]: I0122 12:14:19.937301 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" event={"ID":"f4812e83-6f17-4bad-8aaa-1521eb0b590f","Type":"ContainerStarted","Data":"0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3"} Jan 22 12:14:19 crc kubenswrapper[5120]: I0122 12:14:19.969159 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" podStartSLOduration=2.009468003 podStartE2EDuration="6.969121801s" podCreationTimestamp="2026-01-22 12:14:13 +0000 UTC" firstStartedPulling="2026-01-22 12:14:14.356340281 +0000 UTC m=+1589.100288622" lastFinishedPulling="2026-01-22 12:14:19.315994079 +0000 UTC m=+1594.059942420" observedRunningTime="2026-01-22 12:14:19.965188986 +0000 UTC m=+1594.709137347" watchObservedRunningTime="2026-01-22 12:14:19.969121801 +0000 UTC m=+1594.713070162" Jan 22 12:14:24 crc kubenswrapper[5120]: I0122 12:14:24.808018 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.620133 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.620498 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.625622 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.625820 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.625637 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.626165 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.628118 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.628199 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.630017 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.630163 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-r88wg\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.630597 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.633987 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.713997 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714088 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714158 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af3a73d7-3578-4530-9916-0c3613d55591-tls-assets\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714200 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714267 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-web-config\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714385 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714423 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714507 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714554 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714603 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af3a73d7-3578-4530-9916-0c3613d55591-config-out\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714673 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6xz5\" (UniqueName: \"kubernetes.io/projected/af3a73d7-3578-4530-9916-0c3613d55591-kube-api-access-j6xz5\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.714737 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-config\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 
crc kubenswrapper[5120]: I0122 12:14:25.816697 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.816812 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.816859 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.816904 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.817164 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af3a73d7-3578-4530-9916-0c3613d55591-config-out\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.817333 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j6xz5\" (UniqueName: \"kubernetes.io/projected/af3a73d7-3578-4530-9916-0c3613d55591-kube-api-access-j6xz5\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.817387 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-config\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.817537 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.817581 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: 
\"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.818171 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.818240 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af3a73d7-3578-4530-9916-0c3613d55591-tls-assets\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.818281 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.818357 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-web-config\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.818369 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.818437 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: E0122 12:14:25.818912 5120 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 22 12:14:25 crc kubenswrapper[5120]: E0122 12:14:25.819098 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-prometheus-proxy-tls podName:af3a73d7-3578-4530-9916-0c3613d55591 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:26.31906456 +0000 UTC m=+1601.063012921 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "af3a73d7-3578-4530-9916-0c3613d55591") : secret "default-prometheus-proxy-tls" not found Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.819127 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/af3a73d7-3578-4530-9916-0c3613d55591-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.825717 5120 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.825766 5120 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/516a3b7b844e4c3dd8240e8a8a3b1694cea78fced0a6ec1a814e8c4102adf5e0/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.825951 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/af3a73d7-3578-4530-9916-0c3613d55591-tls-assets\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.829245 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/af3a73d7-3578-4530-9916-0c3613d55591-config-out\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.830210 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.835741 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-config\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.849797 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-web-config\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.861036 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6xz5\" (UniqueName: 
\"kubernetes.io/projected/af3a73d7-3578-4530-9916-0c3613d55591-kube-api-access-j6xz5\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:25 crc kubenswrapper[5120]: I0122 12:14:25.868332 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-84a3aaf7-fa67-41ac-a74f-0e48eff03333\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:26 crc kubenswrapper[5120]: I0122 12:14:26.329220 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:26 crc kubenswrapper[5120]: I0122 12:14:26.336241 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/af3a73d7-3578-4530-9916-0c3613d55591-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"af3a73d7-3578-4530-9916-0c3613d55591\") " pod="service-telemetry/prometheus-default-0" Jan 22 12:14:26 crc kubenswrapper[5120]: I0122 12:14:26.544735 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 22 12:14:26 crc kubenswrapper[5120]: I0122 12:14:26.812667 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 22 12:14:26 crc kubenswrapper[5120]: I0122 12:14:26.999416 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"af3a73d7-3578-4530-9916-0c3613d55591","Type":"ContainerStarted","Data":"267f7596e532eb745728638b48080b207a404ea82edb46b16da5fa5634680e48"} Jan 22 12:14:32 crc kubenswrapper[5120]: I0122 12:14:32.046078 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"af3a73d7-3578-4530-9916-0c3613d55591","Type":"ContainerStarted","Data":"6b5ede4cc3e631a410cc904199c1aad8fe648776f622956eebb0434b7ec3fd11"} Jan 22 12:14:35 crc kubenswrapper[5120]: I0122 12:14:35.494220 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-4xz7b"] Jan 22 12:14:35 crc kubenswrapper[5120]: I0122 12:14:35.558632 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-4xz7b"] Jan 22 12:14:35 crc kubenswrapper[5120]: I0122 12:14:35.558872 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" Jan 22 12:14:35 crc kubenswrapper[5120]: I0122 12:14:35.684310 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxmzc\" (UniqueName: \"kubernetes.io/projected/cb40028b-f955-4b75-b559-a1c4ec5c9256-kube-api-access-rxmzc\") pod \"default-snmp-webhook-694dc457d5-4xz7b\" (UID: \"cb40028b-f955-4b75-b559-a1c4ec5c9256\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" Jan 22 12:14:35 crc kubenswrapper[5120]: I0122 12:14:35.786075 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rxmzc\" (UniqueName: \"kubernetes.io/projected/cb40028b-f955-4b75-b559-a1c4ec5c9256-kube-api-access-rxmzc\") pod \"default-snmp-webhook-694dc457d5-4xz7b\" (UID: \"cb40028b-f955-4b75-b559-a1c4ec5c9256\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" Jan 22 12:14:35 crc kubenswrapper[5120]: I0122 12:14:35.814033 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxmzc\" (UniqueName: \"kubernetes.io/projected/cb40028b-f955-4b75-b559-a1c4ec5c9256-kube-api-access-rxmzc\") pod \"default-snmp-webhook-694dc457d5-4xz7b\" (UID: \"cb40028b-f955-4b75-b559-a1c4ec5c9256\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" Jan 22 12:14:35 crc kubenswrapper[5120]: I0122 12:14:35.889259 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" Jan 22 12:14:36 crc kubenswrapper[5120]: I0122 12:14:36.187561 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-4xz7b"] Jan 22 12:14:36 crc kubenswrapper[5120]: W0122 12:14:36.215219 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb40028b_f955_4b75_b559_a1c4ec5c9256.slice/crio-6846a61470def7ba51d45ab323cc5d3ff77328384f1daf7ea9d5f35c9d435fc1 WatchSource:0}: Error finding container 6846a61470def7ba51d45ab323cc5d3ff77328384f1daf7ea9d5f35c9d435fc1: Status 404 returned error can't find the container with id 6846a61470def7ba51d45ab323cc5d3ff77328384f1daf7ea9d5f35c9d435fc1 Jan 22 12:14:37 crc kubenswrapper[5120]: I0122 12:14:37.091050 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" event={"ID":"cb40028b-f955-4b75-b559-a1c4ec5c9256","Type":"ContainerStarted","Data":"6846a61470def7ba51d45ab323cc5d3ff77328384f1daf7ea9d5f35c9d435fc1"} Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.874833 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.898972 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.899245 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.904500 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.904527 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-csp9t\"" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.904556 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.904505 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.904942 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.905095 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971646 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971711 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971744 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971809 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-web-config\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971850 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-config-volume\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971869 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/88fc8b5e-6a79-414c-8a72-7447f8db3056-config-out\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971884 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vswvr\" (UniqueName: \"kubernetes.io/projected/88fc8b5e-6a79-414c-8a72-7447f8db3056-kube-api-access-vswvr\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971908 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:38 crc kubenswrapper[5120]: I0122 12:14:38.971929 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/88fc8b5e-6a79-414c-8a72-7447f8db3056-tls-assets\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.073367 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.073432 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.073509 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.075589 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-web-config\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.075657 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-config-volume\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.075691 5120 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/88fc8b5e-6a79-414c-8a72-7447f8db3056-config-out\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.075717 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vswvr\" (UniqueName: \"kubernetes.io/projected/88fc8b5e-6a79-414c-8a72-7447f8db3056-kube-api-access-vswvr\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.075749 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.075783 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/88fc8b5e-6a79-414c-8a72-7447f8db3056-tls-assets\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: E0122 12:14:39.076407 5120 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:39 crc kubenswrapper[5120]: E0122 12:14:39.076559 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls podName:88fc8b5e-6a79-414c-8a72-7447f8db3056 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:39.576526656 +0000 UTC m=+1614.320474997 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "88fc8b5e-6a79-414c-8a72-7447f8db3056") : secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.084528 5120 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.084592 5120 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/993d84bf7be45a27faf02d688ca3124bd0e06ed43b7298b0f65b55e404201a0b/globalmount\"" pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.086561 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-web-config\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.095773 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/88fc8b5e-6a79-414c-8a72-7447f8db3056-tls-assets\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.096080 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.098642 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vswvr\" (UniqueName: \"kubernetes.io/projected/88fc8b5e-6a79-414c-8a72-7447f8db3056-kube-api-access-vswvr\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.099094 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-config-volume\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.103308 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.104257 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/88fc8b5e-6a79-414c-8a72-7447f8db3056-config-out\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.110857 5120 generic.go:358] "Generic (PLEG): container finished" podID="af3a73d7-3578-4530-9916-0c3613d55591" containerID="6b5ede4cc3e631a410cc904199c1aad8fe648776f622956eebb0434b7ec3fd11" exitCode=0 Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 
12:14:39.110912 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"af3a73d7-3578-4530-9916-0c3613d55591","Type":"ContainerDied","Data":"6b5ede4cc3e631a410cc904199c1aad8fe648776f622956eebb0434b7ec3fd11"} Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.123170 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0659fa00-6f2f-4e8b-a324-2daff3f775a1\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: I0122 12:14:39.584864 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:39 crc kubenswrapper[5120]: E0122 12:14:39.585498 5120 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:39 crc kubenswrapper[5120]: E0122 12:14:39.585615 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls podName:88fc8b5e-6a79-414c-8a72-7447f8db3056 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:40.585589558 +0000 UTC m=+1615.329538119 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "88fc8b5e-6a79-414c-8a72-7447f8db3056") : secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:40 crc kubenswrapper[5120]: I0122 12:14:40.603592 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:40 crc kubenswrapper[5120]: E0122 12:14:40.603939 5120 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:40 crc kubenswrapper[5120]: E0122 12:14:40.604568 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls podName:88fc8b5e-6a79-414c-8a72-7447f8db3056 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:42.604531404 +0000 UTC m=+1617.348479755 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "88fc8b5e-6a79-414c-8a72-7447f8db3056") : secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:42 crc kubenswrapper[5120]: I0122 12:14:42.642393 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:42 crc kubenswrapper[5120]: E0122 12:14:42.643908 5120 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:42 crc kubenswrapper[5120]: E0122 12:14:42.644160 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls podName:88fc8b5e-6a79-414c-8a72-7447f8db3056 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:46.644135537 +0000 UTC m=+1621.388083888 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "88fc8b5e-6a79-414c-8a72-7447f8db3056") : secret "default-alertmanager-proxy-tls" not found Jan 22 12:14:46 crc kubenswrapper[5120]: I0122 12:14:46.729094 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:46 crc kubenswrapper[5120]: I0122 12:14:46.738283 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/88fc8b5e-6a79-414c-8a72-7447f8db3056-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"88fc8b5e-6a79-414c-8a72-7447f8db3056\") " pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:47 crc kubenswrapper[5120]: I0122 12:14:47.023603 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-csp9t\"" Jan 22 12:14:47 crc kubenswrapper[5120]: I0122 12:14:47.031661 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 22 12:14:47 crc kubenswrapper[5120]: I0122 12:14:47.382539 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 22 12:14:47 crc kubenswrapper[5120]: W0122 12:14:47.389616 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88fc8b5e_6a79_414c_8a72_7447f8db3056.slice/crio-bc00615fe587010e2bf03b7cc704c63f81763d12e755aa641318a9c23b19c0e3 WatchSource:0}: Error finding container bc00615fe587010e2bf03b7cc704c63f81763d12e755aa641318a9c23b19c0e3: Status 404 returned error can't find the container with id bc00615fe587010e2bf03b7cc704c63f81763d12e755aa641318a9c23b19c0e3 Jan 22 12:14:48 crc kubenswrapper[5120]: I0122 12:14:48.202816 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" event={"ID":"cb40028b-f955-4b75-b559-a1c4ec5c9256","Type":"ContainerStarted","Data":"0d1bcaf6d02cf6d43327afa2e95c3dbc92c421661b050060877f4244b7795329"} Jan 22 12:14:48 crc kubenswrapper[5120]: I0122 12:14:48.205036 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"88fc8b5e-6a79-414c-8a72-7447f8db3056","Type":"ContainerStarted","Data":"bc00615fe587010e2bf03b7cc704c63f81763d12e755aa641318a9c23b19c0e3"} Jan 22 12:14:48 crc kubenswrapper[5120]: I0122 12:14:48.228518 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-694dc457d5-4xz7b" podStartSLOduration=1.711983881 podStartE2EDuration="13.228494865s" podCreationTimestamp="2026-01-22 12:14:35 +0000 UTC" firstStartedPulling="2026-01-22 12:14:36.219311136 +0000 UTC m=+1610.963259497" lastFinishedPulling="2026-01-22 12:14:47.73582213 +0000 UTC m=+1622.479770481" observedRunningTime="2026-01-22 12:14:48.225232836 +0000 UTC m=+1622.969181187" watchObservedRunningTime="2026-01-22 12:14:48.228494865 +0000 UTC m=+1622.972443196" Jan 22 12:14:51 crc kubenswrapper[5120]: I0122 12:14:51.238702 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"88fc8b5e-6a79-414c-8a72-7447f8db3056","Type":"ContainerStarted","Data":"30f1056fa40244c608160ccec3cf2c890121b8d494a730dc6f2221ba70fdffd2"} Jan 22 12:14:52 crc kubenswrapper[5120]: I0122 12:14:52.250536 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"af3a73d7-3578-4530-9916-0c3613d55591","Type":"ContainerStarted","Data":"b092ff5869d20f2aaf91483ecba3ef7e97ccf8e1e82ef6a6dbb4b90d4a22c378"} Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.281028 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"af3a73d7-3578-4530-9916-0c3613d55591","Type":"ContainerStarted","Data":"7e4ddf9c82997913cbafbab529e2d0b650a371fab6ea95043271935edefc4350"} Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.393121 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8"] Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.457122 5120 scope.go:117] "RemoveContainer" containerID="8c734d96e4b1f47996c023313a0ce278e60832df482833ed84ccfa06214e5cc6" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.457765 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.464754 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.465412 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.465693 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-xtq4h\"" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.465929 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.480600 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8"] Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.480740 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.480820 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.480846 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.480877 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.480985 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvgp\" (UniqueName: \"kubernetes.io/projected/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-kube-api-access-9xvgp\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: 
I0122 12:14:55.582080 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvgp\" (UniqueName: \"kubernetes.io/projected/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-kube-api-access-9xvgp\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.582158 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.582215 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.582237 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.582260 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.582842 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: E0122 12:14:55.583011 5120 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 22 12:14:55 crc kubenswrapper[5120]: E0122 12:14:55.583104 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls podName:d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:56.083080616 +0000 UTC m=+1630.827028957 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" (UID: "d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.584098 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.604490 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:55 crc kubenswrapper[5120]: I0122 12:14:55.610498 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvgp\" (UniqueName: \"kubernetes.io/projected/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-kube-api-access-9xvgp\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:56 crc kubenswrapper[5120]: I0122 12:14:56.092061 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:56 crc kubenswrapper[5120]: E0122 12:14:56.092297 5120 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 22 12:14:56 crc kubenswrapper[5120]: E0122 12:14:56.092377 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls podName:d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:57.092355984 +0000 UTC m=+1631.836304325 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" (UID: "d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 22 12:14:57 crc kubenswrapper[5120]: I0122 12:14:57.110804 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:57 crc kubenswrapper[5120]: I0122 12:14:57.123914 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8\" (UID: \"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:57 crc kubenswrapper[5120]: I0122 12:14:57.340319 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" Jan 22 12:14:57 crc kubenswrapper[5120]: I0122 12:14:57.777364 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8"] Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.897345 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f"] Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.906414 5120 util.go:30] "No sandbox for pod can be found. 
Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.908898 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f"] Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.909459 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.909872 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.940677 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.940744 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pwz\" (UniqueName: \"kubernetes.io/projected/e3b00756-b775-4a1c-90b1-852a7f1712b7-kube-api-access-h4pwz\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.940814 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e3b00756-b775-4a1c-90b1-852a7f1712b7-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.940880 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e3b00756-b775-4a1c-90b1-852a7f1712b7-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:58 crc kubenswrapper[5120]: I0122 12:14:58.940925 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.042618 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan
22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.043086 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h4pwz\" (UniqueName: \"kubernetes.io/projected/e3b00756-b775-4a1c-90b1-852a7f1712b7-kube-api-access-h4pwz\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.043135 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e3b00756-b775-4a1c-90b1-852a7f1712b7-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.043212 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e3b00756-b775-4a1c-90b1-852a7f1712b7-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.043237 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: E0122 12:14:59.043418 5120 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 22 12:14:59 crc kubenswrapper[5120]: E0122 12:14:59.043525 5120 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls podName:e3b00756-b775-4a1c-90b1-852a7f1712b7 nodeName:}" failed. No retries permitted until 2026-01-22 12:14:59.543500294 +0000 UTC m=+1634.287448635 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" (UID: "e3b00756-b775-4a1c-90b1-852a7f1712b7") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.044659 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e3b00756-b775-4a1c-90b1-852a7f1712b7-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.045699 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e3b00756-b775-4a1c-90b1-852a7f1712b7-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.058908 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.060989 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4pwz\" (UniqueName: \"kubernetes.io/projected/e3b00756-b775-4a1c-90b1-852a7f1712b7-kube-api-access-h4pwz\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.317552 5120 generic.go:358] "Generic (PLEG): container finished" podID="88fc8b5e-6a79-414c-8a72-7447f8db3056" containerID="30f1056fa40244c608160ccec3cf2c890121b8d494a730dc6f2221ba70fdffd2" exitCode=0 Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.317670 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"88fc8b5e-6a79-414c-8a72-7447f8db3056","Type":"ContainerDied","Data":"30f1056fa40244c608160ccec3cf2c890121b8d494a730dc6f2221ba70fdffd2"} Jan 22 12:14:59 crc kubenswrapper[5120]: I0122 12:14:59.551242 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:14:59 crc kubenswrapper[5120]: E0122 12:14:59.551453 5120 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 22 12:14:59 crc kubenswrapper[5120]: E0122 12:14:59.551518 5120 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls podName:e3b00756-b775-4a1c-90b1-852a7f1712b7 nodeName:}" failed. No retries permitted until 2026-01-22 12:15:00.551500291 +0000 UTC m=+1635.295448622 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" (UID: "e3b00756-b775-4a1c-90b1-852a7f1712b7") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 22 12:15:00 crc kubenswrapper[5120]: W0122 12:15:00.048320 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3caee9e_30bb_45fe_8ff9_2ef2a5f6d9a2.slice/crio-69e58dab5eb9de816be7ffe58d5b9b3d5415e201026f572e02dd3a91f3643400 WatchSource:0}: Error finding container 69e58dab5eb9de816be7ffe58d5b9b3d5415e201026f572e02dd3a91f3643400: Status 404 returned error can't find the container with id 69e58dab5eb9de816be7ffe58d5b9b3d5415e201026f572e02dd3a91f3643400 Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.135254 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk"] Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.572745 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.600688 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e3b00756-b775-4a1c-90b1-852a7f1712b7-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f\" (UID: \"e3b00756-b775-4a1c-90b1-852a7f1712b7\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.726130 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.993441 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk"] Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.993495 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerStarted","Data":"69e58dab5eb9de816be7ffe58d5b9b3d5415e201026f572e02dd3a91f3643400"} Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.993680 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.997887 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 22 12:15:00 crc kubenswrapper[5120]: I0122 12:15:00.999012 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.081318 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5445dd15-192f-4528-92eb-f9507eb342c4-config-volume\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.081401 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5445dd15-192f-4528-92eb-f9507eb342c4-secret-volume\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.081906 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hffn\" (UniqueName: \"kubernetes.io/projected/5445dd15-192f-4528-92eb-f9507eb342c4-kube-api-access-6hffn\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.183283 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5445dd15-192f-4528-92eb-f9507eb342c4-config-volume\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.183550 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5445dd15-192f-4528-92eb-f9507eb342c4-secret-volume\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.183981 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6hffn\" (UniqueName: \"kubernetes.io/projected/5445dd15-192f-4528-92eb-f9507eb342c4-kube-api-access-6hffn\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.184759 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5445dd15-192f-4528-92eb-f9507eb342c4-config-volume\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 
22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.208341 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hffn\" (UniqueName: \"kubernetes.io/projected/5445dd15-192f-4528-92eb-f9507eb342c4-kube-api-access-6hffn\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.214688 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5445dd15-192f-4528-92eb-f9507eb342c4-secret-volume\") pod \"collect-profiles-29484735-6dctk\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:01 crc kubenswrapper[5120]: I0122 12:15:01.318574 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:02 crc kubenswrapper[5120]: I0122 12:15:02.392368 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f"] Jan 22 12:15:02 crc kubenswrapper[5120]: I0122 12:15:02.624793 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk"] Jan 22 12:15:02 crc kubenswrapper[5120]: I0122 12:15:02.789071 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x"] Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.041747 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x"] Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.042056 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.056583 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.056909 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.166877 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9836015c-341f-44a4-a0b1-2d155148b264-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.167023 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9836015c-341f-44a4-a0b1-2d155148b264-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.167134 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9836015c-341f-44a4-a0b1-2d155148b264-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.167197 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wwvq\" (UniqueName: \"kubernetes.io/projected/9836015c-341f-44a4-a0b1-2d155148b264-kube-api-access-4wwvq\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.167273 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9836015c-341f-44a4-a0b1-2d155148b264-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.269490 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4wwvq\" (UniqueName: \"kubernetes.io/projected/9836015c-341f-44a4-a0b1-2d155148b264-kube-api-access-4wwvq\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.269980 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" 
(UniqueName: \"kubernetes.io/configmap/9836015c-341f-44a4-a0b1-2d155148b264-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.270059 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9836015c-341f-44a4-a0b1-2d155148b264-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.270237 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9836015c-341f-44a4-a0b1-2d155148b264-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.270341 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9836015c-341f-44a4-a0b1-2d155148b264-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.270877 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9836015c-341f-44a4-a0b1-2d155148b264-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.271326 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9836015c-341f-44a4-a0b1-2d155148b264-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.277311 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9836015c-341f-44a4-a0b1-2d155148b264-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.279413 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9836015c-341f-44a4-a0b1-2d155148b264-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.289759 5120 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wwvq\" (UniqueName: \"kubernetes.io/projected/9836015c-341f-44a4-a0b1-2d155148b264-kube-api-access-4wwvq\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x\" (UID: \"9836015c-341f-44a4-a0b1-2d155148b264\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.373536 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" Jan 22 12:15:04 crc kubenswrapper[5120]: I0122 12:15:04.543745 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:15:04 crc kubenswrapper[5120]: W0122 12:15:04.554235 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5445dd15_192f_4528_92eb_f9507eb342c4.slice/crio-0f766fa8f7b14734c4c130b3f9dfbafe7f6d28769f50bd5549f3b24e535173a1 WatchSource:0}: Error finding container 0f766fa8f7b14734c4c130b3f9dfbafe7f6d28769f50bd5549f3b24e535173a1: Status 404 returned error can't find the container with id 0f766fa8f7b14734c4c130b3f9dfbafe7f6d28769f50bd5549f3b24e535173a1 Jan 22 12:15:05 crc kubenswrapper[5120]: I0122 12:15:05.369012 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" event={"ID":"5445dd15-192f-4528-92eb-f9507eb342c4","Type":"ContainerStarted","Data":"0f766fa8f7b14734c4c130b3f9dfbafe7f6d28769f50bd5549f3b24e535173a1"} Jan 22 12:15:05 crc kubenswrapper[5120]: I0122 12:15:05.370589 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerStarted","Data":"d34f142542f8f75929c7974229697eb860737c0228fc61a137ea5912ad5fe315"} Jan 22 12:15:05 crc kubenswrapper[5120]: I0122 12:15:05.443632 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x"] Jan 22 12:15:05 crc kubenswrapper[5120]: W0122 12:15:05.493403 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9836015c_341f_44a4_a0b1_2d155148b264.slice/crio-829940e1b162f872468c8c8e4153fbb631007dbffbddd0d2bd3449be853859e2 WatchSource:0}: Error finding container 829940e1b162f872468c8c8e4153fbb631007dbffbddd0d2bd3449be853859e2: Status 404 returned error can't find the container with id 829940e1b162f872468c8c8e4153fbb631007dbffbddd0d2bd3449be853859e2 Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.396927 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerStarted","Data":"829940e1b162f872468c8c8e4153fbb631007dbffbddd0d2bd3449be853859e2"} Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.412608 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerStarted","Data":"a6e0f981f486f38353addb18f494e615397c4a02727a7ff4e676ed27dc14fef0"} Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.431540 5120 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"af3a73d7-3578-4530-9916-0c3613d55591","Type":"ContainerStarted","Data":"4531be95f01863e65e2e98ca683f9ec6225692957186aed34aa39591d4778820"} Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.452506 5120 generic.go:358] "Generic (PLEG): container finished" podID="5445dd15-192f-4528-92eb-f9507eb342c4" containerID="21cb135b3d3bfb01aa6f0319bccbb82d56dd92e0a9f8f4fb24aad8d3347005ef" exitCode=0 Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.453032 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" event={"ID":"5445dd15-192f-4528-92eb-f9507eb342c4","Type":"ContainerDied","Data":"21cb135b3d3bfb01aa6f0319bccbb82d56dd92e0a9f8f4fb24aad8d3347005ef"} Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.469623 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"88fc8b5e-6a79-414c-8a72-7447f8db3056","Type":"ContainerStarted","Data":"18c931b48b2a489c0d341f03ada9db0324e8480268636d9729fb5334d4d8d860"} Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.470791 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=5.7351635 podStartE2EDuration="43.47077629s" podCreationTimestamp="2026-01-22 12:14:23 +0000 UTC" firstStartedPulling="2026-01-22 12:14:26.8200614 +0000 UTC m=+1601.564009741" lastFinishedPulling="2026-01-22 12:15:04.55567419 +0000 UTC m=+1639.299622531" observedRunningTime="2026-01-22 12:15:06.468494304 +0000 UTC m=+1641.212442655" watchObservedRunningTime="2026-01-22 12:15:06.47077629 +0000 UTC m=+1641.214724641" Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.478466 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerStarted","Data":"7e7dda325c715430af27761b2a39be29ee48203c1dad63762cfd24e7d9e23e0a"} Jan 22 12:15:06 crc kubenswrapper[5120]: I0122 12:15:06.545809 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Jan 22 12:15:07 crc kubenswrapper[5120]: I0122 12:15:07.491180 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerStarted","Data":"b2a18027ba7ead04f02ba66effda4a3c4923f293ef04c4cba6c16d3c3826c19c"} Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.011288 5120 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.143251 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5445dd15-192f-4528-92eb-f9507eb342c4-secret-volume\") pod \"5445dd15-192f-4528-92eb-f9507eb342c4\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.143408 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hffn\" (UniqueName: \"kubernetes.io/projected/5445dd15-192f-4528-92eb-f9507eb342c4-kube-api-access-6hffn\") pod \"5445dd15-192f-4528-92eb-f9507eb342c4\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.143580 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5445dd15-192f-4528-92eb-f9507eb342c4-config-volume\") pod \"5445dd15-192f-4528-92eb-f9507eb342c4\" (UID: \"5445dd15-192f-4528-92eb-f9507eb342c4\") " Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.144549 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5445dd15-192f-4528-92eb-f9507eb342c4-config-volume" (OuterVolumeSpecName: "config-volume") pod "5445dd15-192f-4528-92eb-f9507eb342c4" (UID: "5445dd15-192f-4528-92eb-f9507eb342c4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.152423 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5445dd15-192f-4528-92eb-f9507eb342c4-kube-api-access-6hffn" (OuterVolumeSpecName: "kube-api-access-6hffn") pod "5445dd15-192f-4528-92eb-f9507eb342c4" (UID: "5445dd15-192f-4528-92eb-f9507eb342c4"). InnerVolumeSpecName "kube-api-access-6hffn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.152487 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5445dd15-192f-4528-92eb-f9507eb342c4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5445dd15-192f-4528-92eb-f9507eb342c4" (UID: "5445dd15-192f-4528-92eb-f9507eb342c4"). InnerVolumeSpecName "secret-volume".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.246596 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5445dd15-192f-4528-92eb-f9507eb342c4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.246645 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6hffn\" (UniqueName: \"kubernetes.io/projected/5445dd15-192f-4528-92eb-f9507eb342c4-kube-api-access-6hffn\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.246655 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5445dd15-192f-4528-92eb-f9507eb342c4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.502780 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerStarted","Data":"e1d8c8ed095be6345cb8d0a5f794ec8f028217d079f94e974827a0b29f123d88"} Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.505465 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.505477 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk" event={"ID":"5445dd15-192f-4528-92eb-f9507eb342c4","Type":"ContainerDied","Data":"0f766fa8f7b14734c4c130b3f9dfbafe7f6d28769f50bd5549f3b24e535173a1"} Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.505542 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f766fa8f7b14734c4c130b3f9dfbafe7f6d28769f50bd5549f3b24e535173a1" Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.508184 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"88fc8b5e-6a79-414c-8a72-7447f8db3056","Type":"ContainerStarted","Data":"f0d4869d9e180c8bbdd71016c35f08e28f84ea8f2fec345086b185d2bd76264f"} Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.510000 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerStarted","Data":"c6aeb92452de7be06be2214e00760877418176e7adeaa44cb0aebda9bf04c25b"} Jan 22 12:15:08 crc kubenswrapper[5120]: I0122 12:15:08.512907 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerStarted","Data":"5f16f1d46c062cbc552acd91ad7e0a4b3cab4d650f43db408caa84b76811fc0c"} Jan 22 12:15:11 crc kubenswrapper[5120]: I0122 12:15:11.546236 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Jan 22 12:15:11 crc kubenswrapper[5120]: I0122 12:15:11.591929 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Jan 22 12:15:11 crc kubenswrapper[5120]: I0122 12:15:11.902997 5120 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v"] Jan 22 12:15:11 crc kubenswrapper[5120]: I0122 12:15:11.903735 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5445dd15-192f-4528-92eb-f9507eb342c4" containerName="collect-profiles" Jan 22 12:15:11 crc kubenswrapper[5120]: I0122 12:15:11.903759 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5445dd15-192f-4528-92eb-f9507eb342c4" containerName="collect-profiles" Jan 22 12:15:11 crc kubenswrapper[5120]: I0122 12:15:11.903891 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="5445dd15-192f-4528-92eb-f9507eb342c4" containerName="collect-profiles" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.097916 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v"] Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.098491 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.102219 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.102608 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.153096 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.185862 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.185932 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4lxd\" (UniqueName: \"kubernetes.io/projected/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-kube-api-access-w4lxd\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.186015 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.186177 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.288421 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.288493 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4lxd\" (UniqueName: \"kubernetes.io/projected/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-kube-api-access-w4lxd\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.288541 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.288597 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.289833 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.290083 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.300997 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: \"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.307064 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4lxd\" (UniqueName: \"kubernetes.io/projected/f2b79a21-0ce0-4563-9ea9-d7cd1e19652d-kube-api-access-w4lxd\") pod \"default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v\" (UID: 
\"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.418367 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" Jan 22 12:15:15 crc kubenswrapper[5120]: I0122 12:15:15.555575 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789"] Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.753357 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789"] Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.755484 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.763884 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.834547 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5a872b8-950f-422a-9b1d-aaf761e5295c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.834620 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c5a872b8-950f-422a-9b1d-aaf761e5295c-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.834656 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klz99\" (UniqueName: \"kubernetes.io/projected/c5a872b8-950f-422a-9b1d-aaf761e5295c-kube-api-access-klz99\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.834747 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c5a872b8-950f-422a-9b1d-aaf761e5295c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.936889 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5a872b8-950f-422a-9b1d-aaf761e5295c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 
12:15:16.937017 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c5a872b8-950f-422a-9b1d-aaf761e5295c-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.937213 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-klz99\" (UniqueName: \"kubernetes.io/projected/c5a872b8-950f-422a-9b1d-aaf761e5295c-kube-api-access-klz99\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.937254 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c5a872b8-950f-422a-9b1d-aaf761e5295c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.937383 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5a872b8-950f-422a-9b1d-aaf761e5295c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.938024 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c5a872b8-950f-422a-9b1d-aaf761e5295c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.959013 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c5a872b8-950f-422a-9b1d-aaf761e5295c-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:16 crc kubenswrapper[5120]: I0122 12:15:16.971925 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-klz99\" (UniqueName: \"kubernetes.io/projected/c5a872b8-950f-422a-9b1d-aaf761e5295c-kube-api-access-klz99\") pod \"default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789\" (UID: \"c5a872b8-950f-422a-9b1d-aaf761e5295c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:17 crc kubenswrapper[5120]: I0122 12:15:17.083025 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.618110 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789"] Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.643041 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerStarted","Data":"4a63e34c1ffbd75e3c65e9e084a7dd1b67521626f1f3f4fd7badd98b6697470f"} Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.646156 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerStarted","Data":"c5b43c7d4d175f20607714924f607b2cab0d2d7acd443bfc7099ba9e09ffec32"} Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.651038 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"88fc8b5e-6a79-414c-8a72-7447f8db3056","Type":"ContainerStarted","Data":"82b888440747885e73b0062a3decda119b5c046feb43a820a60bd17e9f0ceea8"} Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.653247 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerStarted","Data":"685a5c6249805ca15d8f5185f3b283536087b10b109e7643b1c22aeafe4b8bd1"} Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.655042 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" event={"ID":"c5a872b8-950f-422a-9b1d-aaf761e5295c","Type":"ContainerStarted","Data":"7a1e68c2b807cc5db771eb6f695c299dd8632926aa5a511b0837ee2d145343df"} Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.695798 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" podStartSLOduration=4.03966137 podStartE2EDuration="18.695772781s" podCreationTimestamp="2026-01-22 12:15:02 +0000 UTC" firstStartedPulling="2026-01-22 12:15:05.496675523 +0000 UTC m=+1640.240623864" lastFinishedPulling="2026-01-22 12:15:20.152786934 +0000 UTC m=+1654.896735275" observedRunningTime="2026-01-22 12:15:20.664187483 +0000 UTC m=+1655.408135824" watchObservedRunningTime="2026-01-22 12:15:20.695772781 +0000 UTC m=+1655.439721122" Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.709696 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v"] Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.719745 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" podStartSLOduration=7.051880252 podStartE2EDuration="22.719707262s" podCreationTimestamp="2026-01-22 12:14:58 +0000 UTC" firstStartedPulling="2026-01-22 12:15:04.543902335 +0000 UTC m=+1639.287850676" lastFinishedPulling="2026-01-22 12:15:20.211729355 +0000 UTC m=+1654.955677686" observedRunningTime="2026-01-22 12:15:20.702387782 +0000 UTC m=+1655.446336123" watchObservedRunningTime="2026-01-22 12:15:20.719707262 +0000 UTC m=+1655.463655603" Jan 22 
12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.735510 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=24.892188595 podStartE2EDuration="43.735487405s" podCreationTimestamp="2026-01-22 12:14:37 +0000 UTC" firstStartedPulling="2026-01-22 12:14:59.319502927 +0000 UTC m=+1634.063451278" lastFinishedPulling="2026-01-22 12:15:18.162801747 +0000 UTC m=+1652.906750088" observedRunningTime="2026-01-22 12:15:20.728450554 +0000 UTC m=+1655.472398915" watchObservedRunningTime="2026-01-22 12:15:20.735487405 +0000 UTC m=+1655.479435736" Jan 22 12:15:20 crc kubenswrapper[5120]: I0122 12:15:20.761568 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" podStartSLOduration=5.615158932 podStartE2EDuration="25.761544588s" podCreationTimestamp="2026-01-22 12:14:55 +0000 UTC" firstStartedPulling="2026-01-22 12:15:00.049880774 +0000 UTC m=+1634.793829105" lastFinishedPulling="2026-01-22 12:15:20.19626642 +0000 UTC m=+1654.940214761" observedRunningTime="2026-01-22 12:15:20.757193463 +0000 UTC m=+1655.501141814" watchObservedRunningTime="2026-01-22 12:15:20.761544588 +0000 UTC m=+1655.505492929" Jan 22 12:15:21 crc kubenswrapper[5120]: I0122 12:15:21.665064 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" event={"ID":"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d","Type":"ContainerStarted","Data":"987f3eb973e91c81504a3418c1f8a80a647f84502e1f5cae44699d863ab161f1"} Jan 22 12:15:21 crc kubenswrapper[5120]: I0122 12:15:21.665418 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" event={"ID":"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d","Type":"ContainerStarted","Data":"9ec1c38b8bad47a50b0391be1aaf44b110a525113a7cbaedcbf781e01d53c413"} Jan 22 12:15:21 crc kubenswrapper[5120]: I0122 12:15:21.665437 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" event={"ID":"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d","Type":"ContainerStarted","Data":"dc07db03994e16c1a351f1718a043a8e78984fa524399e878a4d263e3c4c812c"} Jan 22 12:15:21 crc kubenswrapper[5120]: I0122 12:15:21.668801 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" event={"ID":"c5a872b8-950f-422a-9b1d-aaf761e5295c","Type":"ContainerStarted","Data":"c92b7f18e3f80496e34770f60f788dc96266c13d766c754380cb51da51e2f377"} Jan 22 12:15:21 crc kubenswrapper[5120]: I0122 12:15:21.668850 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" event={"ID":"c5a872b8-950f-422a-9b1d-aaf761e5295c","Type":"ContainerStarted","Data":"d70e84c4805e3abc4485f6e976fabc66057dc851ff94b34902b82b744cc891a2"} Jan 22 12:15:21 crc kubenswrapper[5120]: I0122 12:15:21.706070 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" podStartSLOduration=10.24052666 podStartE2EDuration="10.706049015s" podCreationTimestamp="2026-01-22 12:15:11 +0000 UTC" firstStartedPulling="2026-01-22 12:15:20.707162747 +0000 UTC m=+1655.451111088" lastFinishedPulling="2026-01-22 12:15:21.172685102 +0000 UTC m=+1655.916633443" 
observedRunningTime="2026-01-22 12:15:21.687384702 +0000 UTC m=+1656.431333043" watchObservedRunningTime="2026-01-22 12:15:21.706049015 +0000 UTC m=+1656.449997356" Jan 22 12:15:21 crc kubenswrapper[5120]: I0122 12:15:21.708195 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" podStartSLOduration=6.24051379 podStartE2EDuration="6.708187818s" podCreationTimestamp="2026-01-22 12:15:15 +0000 UTC" firstStartedPulling="2026-01-22 12:15:20.6232738 +0000 UTC m=+1655.367222141" lastFinishedPulling="2026-01-22 12:15:21.090947828 +0000 UTC m=+1655.834896169" observedRunningTime="2026-01-22 12:15:21.702370777 +0000 UTC m=+1656.446319128" watchObservedRunningTime="2026-01-22 12:15:21.708187818 +0000 UTC m=+1656.452136159" Jan 22 12:15:28 crc kubenswrapper[5120]: I0122 12:15:28.616824 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zgrdr"] Jan 22 12:15:28 crc kubenswrapper[5120]: I0122 12:15:28.617797 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" podUID="f4812e83-6f17-4bad-8aaa-1521eb0b590f" containerName="default-interconnect" containerID="cri-o://0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3" gracePeriod=30 Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.123352 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.162374 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-48w6f"] Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.163292 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f4812e83-6f17-4bad-8aaa-1521eb0b590f" containerName="default-interconnect" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.163321 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4812e83-6f17-4bad-8aaa-1521eb0b590f" containerName="default-interconnect" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.163479 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f4812e83-6f17-4bad-8aaa-1521eb0b590f" containerName="default-interconnect" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.168643 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.182581 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-48w6f"] Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.235127 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-config\") pod \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.235244 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-credentials\") pod \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.235469 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmtv5\" (UniqueName: \"kubernetes.io/projected/f4812e83-6f17-4bad-8aaa-1521eb0b590f-kube-api-access-jmtv5\") pod \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.236256 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "f4812e83-6f17-4bad-8aaa-1521eb0b590f" (UID: "f4812e83-6f17-4bad-8aaa-1521eb0b590f"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.236344 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-ca\") pod \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.236402 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-credentials\") pod \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.236426 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-users\") pod \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.236506 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-ca\") pod \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\" (UID: \"f4812e83-6f17-4bad-8aaa-1521eb0b590f\") " Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.236826 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.238162 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a388a8ad-2606-4be5-9640-e8b11efa3daa-sasl-config\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.238338 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-sasl-users\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.238374 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.238431 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.238617 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf4pt\" (UniqueName: \"kubernetes.io/projected/a388a8ad-2606-4be5-9640-e8b11efa3daa-kube-api-access-tf4pt\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.238706 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.238771 5120 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-config\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.244298 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-credentials" 
(OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "f4812e83-6f17-4bad-8aaa-1521eb0b590f" (UID: "f4812e83-6f17-4bad-8aaa-1521eb0b590f"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.244331 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "f4812e83-6f17-4bad-8aaa-1521eb0b590f" (UID: "f4812e83-6f17-4bad-8aaa-1521eb0b590f"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.244382 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "f4812e83-6f17-4bad-8aaa-1521eb0b590f" (UID: "f4812e83-6f17-4bad-8aaa-1521eb0b590f"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.246703 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "f4812e83-6f17-4bad-8aaa-1521eb0b590f" (UID: "f4812e83-6f17-4bad-8aaa-1521eb0b590f"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.247007 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4812e83-6f17-4bad-8aaa-1521eb0b590f-kube-api-access-jmtv5" (OuterVolumeSpecName: "kube-api-access-jmtv5") pod "f4812e83-6f17-4bad-8aaa-1521eb0b590f" (UID: "f4812e83-6f17-4bad-8aaa-1521eb0b590f"). InnerVolumeSpecName "kube-api-access-jmtv5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.266853 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "f4812e83-6f17-4bad-8aaa-1521eb0b590f" (UID: "f4812e83-6f17-4bad-8aaa-1521eb0b590f"). InnerVolumeSpecName "default-interconnect-inter-router-ca". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.339887 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tf4pt\" (UniqueName: \"kubernetes.io/projected/a388a8ad-2606-4be5-9640-e8b11efa3daa-kube-api-access-tf4pt\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.339987 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340399 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340450 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a388a8ad-2606-4be5-9640-e8b11efa3daa-sasl-config\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340501 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-sasl-users\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340533 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340584 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340684 5120 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340709 5120 reconciler_common.go:299] "Volume detached for volume 
\"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340727 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jmtv5\" (UniqueName: \"kubernetes.io/projected/f4812e83-6f17-4bad-8aaa-1521eb0b590f-kube-api-access-jmtv5\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340740 5120 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340753 5120 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.340770 5120 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f4812e83-6f17-4bad-8aaa-1521eb0b590f-sasl-users\") on node \"crc\" DevicePath \"\"" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.342428 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a388a8ad-2606-4be5-9640-e8b11efa3daa-sasl-config\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.347026 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.347145 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.347395 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-sasl-users\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.347654 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: 
I0122 12:15:29.347745 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a388a8ad-2606-4be5-9640-e8b11efa3daa-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.359934 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf4pt\" (UniqueName: \"kubernetes.io/projected/a388a8ad-2606-4be5-9640-e8b11efa3daa-kube-api-access-tf4pt\") pod \"default-interconnect-55bf8d5cb-48w6f\" (UID: \"a388a8ad-2606-4be5-9640-e8b11efa3daa\") " pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.485917 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.756149 5120 generic.go:358] "Generic (PLEG): container finished" podID="f2b79a21-0ce0-4563-9ea9-d7cd1e19652d" containerID="9ec1c38b8bad47a50b0391be1aaf44b110a525113a7cbaedcbf781e01d53c413" exitCode=0 Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.756801 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" event={"ID":"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d","Type":"ContainerDied","Data":"9ec1c38b8bad47a50b0391be1aaf44b110a525113a7cbaedcbf781e01d53c413"} Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.757633 5120 scope.go:117] "RemoveContainer" containerID="9ec1c38b8bad47a50b0391be1aaf44b110a525113a7cbaedcbf781e01d53c413" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.766461 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-48w6f"] Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.777542 5120 generic.go:358] "Generic (PLEG): container finished" podID="d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2" containerID="c6aeb92452de7be06be2214e00760877418176e7adeaa44cb0aebda9bf04c25b" exitCode=0 Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.777780 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerDied","Data":"c6aeb92452de7be06be2214e00760877418176e7adeaa44cb0aebda9bf04c25b"} Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.778494 5120 scope.go:117] "RemoveContainer" containerID="c6aeb92452de7be06be2214e00760877418176e7adeaa44cb0aebda9bf04c25b" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.785969 5120 generic.go:358] "Generic (PLEG): container finished" podID="f4812e83-6f17-4bad-8aaa-1521eb0b590f" containerID="0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3" exitCode=0 Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.786047 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" event={"ID":"f4812e83-6f17-4bad-8aaa-1521eb0b590f","Type":"ContainerDied","Data":"0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3"} Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.786076 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" 
event={"ID":"f4812e83-6f17-4bad-8aaa-1521eb0b590f","Type":"ContainerDied","Data":"11628814cb12f23bc6c37dd57728341ba4c21021b5a9ed812a9f0c32aac8439a"} Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.786095 5120 scope.go:117] "RemoveContainer" containerID="0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.786270 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-zgrdr" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.800758 5120 generic.go:358] "Generic (PLEG): container finished" podID="c5a872b8-950f-422a-9b1d-aaf761e5295c" containerID="d70e84c4805e3abc4485f6e976fabc66057dc851ff94b34902b82b744cc891a2" exitCode=0 Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.801364 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" event={"ID":"c5a872b8-950f-422a-9b1d-aaf761e5295c","Type":"ContainerDied","Data":"d70e84c4805e3abc4485f6e976fabc66057dc851ff94b34902b82b744cc891a2"} Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.802324 5120 scope.go:117] "RemoveContainer" containerID="d70e84c4805e3abc4485f6e976fabc66057dc851ff94b34902b82b744cc891a2" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.809894 5120 generic.go:358] "Generic (PLEG): container finished" podID="9836015c-341f-44a4-a0b1-2d155148b264" containerID="5f16f1d46c062cbc552acd91ad7e0a4b3cab4d650f43db408caa84b76811fc0c" exitCode=0 Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.810014 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerDied","Data":"5f16f1d46c062cbc552acd91ad7e0a4b3cab4d650f43db408caa84b76811fc0c"} Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.810660 5120 scope.go:117] "RemoveContainer" containerID="5f16f1d46c062cbc552acd91ad7e0a4b3cab4d650f43db408caa84b76811fc0c" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.819300 5120 generic.go:358] "Generic (PLEG): container finished" podID="e3b00756-b775-4a1c-90b1-852a7f1712b7" containerID="e1d8c8ed095be6345cb8d0a5f794ec8f028217d079f94e974827a0b29f123d88" exitCode=0 Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.819626 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerDied","Data":"e1d8c8ed095be6345cb8d0a5f794ec8f028217d079f94e974827a0b29f123d88"} Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.820366 5120 scope.go:117] "RemoveContainer" containerID="e1d8c8ed095be6345cb8d0a5f794ec8f028217d079f94e974827a0b29f123d88" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.878216 5120 scope.go:117] "RemoveContainer" containerID="0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3" Jan 22 12:15:29 crc kubenswrapper[5120]: E0122 12:15:29.895158 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3\": container with ID starting with 0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3 not found: ID does not exist" containerID="0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3" Jan 22 12:15:29 crc 
kubenswrapper[5120]: I0122 12:15:29.895212 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3"} err="failed to get container status \"0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3\": rpc error: code = NotFound desc = could not find container \"0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3\": container with ID starting with 0d4764f8cb2010a2330da137cf47631a4f97251072e950235bdfec5d58620ae3 not found: ID does not exist" Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.929260 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zgrdr"] Jan 22 12:15:29 crc kubenswrapper[5120]: I0122 12:15:29.937618 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zgrdr"] Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.828075 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" event={"ID":"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d","Type":"ContainerStarted","Data":"c4db07130a5f11ed8e59402d43134e380a3362e5f92b473819c2fade40ee3899"} Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.833882 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerStarted","Data":"97462354de95960163932658d34175f140ce0daa6f23dd586725b5b6345bd569"} Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.838028 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" event={"ID":"c5a872b8-950f-422a-9b1d-aaf761e5295c","Type":"ContainerStarted","Data":"274248b9acd2538341b1a08623b92f7d58fd2d4d7ad0c4de846150867a678587"} Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.840287 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" event={"ID":"a388a8ad-2606-4be5-9640-e8b11efa3daa","Type":"ContainerStarted","Data":"7dc973f07cf99aa6d3dc92d6eeaff63f2b111736c9320e7d4c19807b7015e888"} Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.840344 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" event={"ID":"a388a8ad-2606-4be5-9640-e8b11efa3daa","Type":"ContainerStarted","Data":"0f911a0b193c43478a45ab1d7dd2f2abdc46d8a07b4fab83c46b1df9c92fd318"} Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.844182 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerStarted","Data":"40db0486309d939b49a3600e70216e8c57a040f1db69bfa2c84a534e01d3a271"} Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.850474 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerStarted","Data":"49ed05819c018c26b9738d4d36e4c05cabc3e62405b65c0e76d324c5160711d2"} Jan 22 12:15:30 crc kubenswrapper[5120]: I0122 12:15:30.872991 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-48w6f" 
podStartSLOduration=2.8729696689999997 podStartE2EDuration="2.872969669s" podCreationTimestamp="2026-01-22 12:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:15:30.867519888 +0000 UTC m=+1665.611468229" watchObservedRunningTime="2026-01-22 12:15:30.872969669 +0000 UTC m=+1665.616918010" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.589399 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4812e83-6f17-4bad-8aaa-1521eb0b590f" path="/var/lib/kubelet/pods/f4812e83-6f17-4bad-8aaa-1521eb0b590f/volumes" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.861229 5120 generic.go:358] "Generic (PLEG): container finished" podID="f2b79a21-0ce0-4563-9ea9-d7cd1e19652d" containerID="c4db07130a5f11ed8e59402d43134e380a3362e5f92b473819c2fade40ee3899" exitCode=0 Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.861292 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" event={"ID":"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d","Type":"ContainerDied","Data":"c4db07130a5f11ed8e59402d43134e380a3362e5f92b473819c2fade40ee3899"} Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.862306 5120 scope.go:117] "RemoveContainer" containerID="9ec1c38b8bad47a50b0391be1aaf44b110a525113a7cbaedcbf781e01d53c413" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.863049 5120 scope.go:117] "RemoveContainer" containerID="c4db07130a5f11ed8e59402d43134e380a3362e5f92b473819c2fade40ee3899" Jan 22 12:15:31 crc kubenswrapper[5120]: E0122 12:15:31.863475 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v_service-telemetry(f2b79a21-0ce0-4563-9ea9-d7cd1e19652d)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" podUID="f2b79a21-0ce0-4563-9ea9-d7cd1e19652d" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.864893 5120 generic.go:358] "Generic (PLEG): container finished" podID="d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2" containerID="97462354de95960163932658d34175f140ce0daa6f23dd586725b5b6345bd569" exitCode=0 Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.865110 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerDied","Data":"97462354de95960163932658d34175f140ce0daa6f23dd586725b5b6345bd569"} Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.865593 5120 scope.go:117] "RemoveContainer" containerID="97462354de95960163932658d34175f140ce0daa6f23dd586725b5b6345bd569" Jan 22 12:15:31 crc kubenswrapper[5120]: E0122 12:15:31.865880 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8_service-telemetry(d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" podUID="d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.873619 5120 generic.go:358] "Generic (PLEG): container finished" podID="c5a872b8-950f-422a-9b1d-aaf761e5295c" 
containerID="274248b9acd2538341b1a08623b92f7d58fd2d4d7ad0c4de846150867a678587" exitCode=0 Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.873888 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" event={"ID":"c5a872b8-950f-422a-9b1d-aaf761e5295c","Type":"ContainerDied","Data":"274248b9acd2538341b1a08623b92f7d58fd2d4d7ad0c4de846150867a678587"} Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.874457 5120 scope.go:117] "RemoveContainer" containerID="274248b9acd2538341b1a08623b92f7d58fd2d4d7ad0c4de846150867a678587" Jan 22 12:15:31 crc kubenswrapper[5120]: E0122 12:15:31.874859 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789_service-telemetry(c5a872b8-950f-422a-9b1d-aaf761e5295c)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" podUID="c5a872b8-950f-422a-9b1d-aaf761e5295c" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.878396 5120 generic.go:358] "Generic (PLEG): container finished" podID="9836015c-341f-44a4-a0b1-2d155148b264" containerID="40db0486309d939b49a3600e70216e8c57a040f1db69bfa2c84a534e01d3a271" exitCode=0 Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.878518 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerDied","Data":"40db0486309d939b49a3600e70216e8c57a040f1db69bfa2c84a534e01d3a271"} Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.879112 5120 scope.go:117] "RemoveContainer" containerID="40db0486309d939b49a3600e70216e8c57a040f1db69bfa2c84a534e01d3a271" Jan 22 12:15:31 crc kubenswrapper[5120]: E0122 12:15:31.879432 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x_service-telemetry(9836015c-341f-44a4-a0b1-2d155148b264)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" podUID="9836015c-341f-44a4-a0b1-2d155148b264" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.881162 5120 generic.go:358] "Generic (PLEG): container finished" podID="e3b00756-b775-4a1c-90b1-852a7f1712b7" containerID="49ed05819c018c26b9738d4d36e4c05cabc3e62405b65c0e76d324c5160711d2" exitCode=0 Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.881828 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerDied","Data":"49ed05819c018c26b9738d4d36e4c05cabc3e62405b65c0e76d324c5160711d2"} Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.882105 5120 scope.go:117] "RemoveContainer" containerID="49ed05819c018c26b9738d4d36e4c05cabc3e62405b65c0e76d324c5160711d2" Jan 22 12:15:31 crc kubenswrapper[5120]: E0122 12:15:31.882306 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f_service-telemetry(e3b00756-b775-4a1c-90b1-852a7f1712b7)\"" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" podUID="e3b00756-b775-4a1c-90b1-852a7f1712b7" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.924523 5120 scope.go:117] "RemoveContainer" containerID="c6aeb92452de7be06be2214e00760877418176e7adeaa44cb0aebda9bf04c25b" Jan 22 12:15:31 crc kubenswrapper[5120]: I0122 12:15:31.994818 5120 scope.go:117] "RemoveContainer" containerID="d70e84c4805e3abc4485f6e976fabc66057dc851ff94b34902b82b744cc891a2" Jan 22 12:15:32 crc kubenswrapper[5120]: I0122 12:15:32.074926 5120 scope.go:117] "RemoveContainer" containerID="5f16f1d46c062cbc552acd91ad7e0a4b3cab4d650f43db408caa84b76811fc0c" Jan 22 12:15:32 crc kubenswrapper[5120]: I0122 12:15:32.124940 5120 scope.go:117] "RemoveContainer" containerID="e1d8c8ed095be6345cb8d0a5f794ec8f028217d079f94e974827a0b29f123d88" Jan 22 12:15:34 crc kubenswrapper[5120]: I0122 12:15:34.938325 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 22 12:15:34 crc kubenswrapper[5120]: I0122 12:15:34.948474 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 22 12:15:34 crc kubenswrapper[5120]: I0122 12:15:34.950673 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Jan 22 12:15:34 crc kubenswrapper[5120]: I0122 12:15:34.951112 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Jan 22 12:15:34 crc kubenswrapper[5120]: I0122 12:15:34.952319 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.054919 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/17ccb7ef-92f9-4fe2-aeac-92f706339496-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.055044 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r7kx\" (UniqueName: \"kubernetes.io/projected/17ccb7ef-92f9-4fe2-aeac-92f706339496-kube-api-access-5r7kx\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.055132 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/17ccb7ef-92f9-4fe2-aeac-92f706339496-qdr-test-config\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.156174 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5r7kx\" (UniqueName: \"kubernetes.io/projected/17ccb7ef-92f9-4fe2-aeac-92f706339496-kube-api-access-5r7kx\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.156314 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/17ccb7ef-92f9-4fe2-aeac-92f706339496-qdr-test-config\") pod \"qdr-test\" (UID: 
\"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.156390 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/17ccb7ef-92f9-4fe2-aeac-92f706339496-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.157737 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/17ccb7ef-92f9-4fe2-aeac-92f706339496-qdr-test-config\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.163647 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/17ccb7ef-92f9-4fe2-aeac-92f706339496-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.176452 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r7kx\" (UniqueName: \"kubernetes.io/projected/17ccb7ef-92f9-4fe2-aeac-92f706339496-kube-api-access-5r7kx\") pod \"qdr-test\" (UID: \"17ccb7ef-92f9-4fe2-aeac-92f706339496\") " pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.272256 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.801435 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 22 12:15:35 crc kubenswrapper[5120]: I0122 12:15:35.923282 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"17ccb7ef-92f9-4fe2-aeac-92f706339496","Type":"ContainerStarted","Data":"72ffffa56ff8da69c001f511b2eb49b0a1f748a037ea79c351dad8dcb565b92e"} Jan 22 12:15:42 crc kubenswrapper[5120]: I0122 12:15:42.572190 5120 scope.go:117] "RemoveContainer" containerID="274248b9acd2538341b1a08623b92f7d58fd2d4d7ad0c4de846150867a678587" Jan 22 12:15:45 crc kubenswrapper[5120]: I0122 12:15:45.586251 5120 scope.go:117] "RemoveContainer" containerID="97462354de95960163932658d34175f140ce0daa6f23dd586725b5b6345bd569" Jan 22 12:15:46 crc kubenswrapper[5120]: I0122 12:15:46.581515 5120 scope.go:117] "RemoveContainer" containerID="c4db07130a5f11ed8e59402d43134e380a3362e5f92b473819c2fade40ee3899" Jan 22 12:15:46 crc kubenswrapper[5120]: I0122 12:15:46.582091 5120 scope.go:117] "RemoveContainer" containerID="49ed05819c018c26b9738d4d36e4c05cabc3e62405b65c0e76d324c5160711d2" Jan 22 12:15:47 crc kubenswrapper[5120]: I0122 12:15:47.572350 5120 scope.go:117] "RemoveContainer" containerID="40db0486309d939b49a3600e70216e8c57a040f1db69bfa2c84a534e01d3a271" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.074586 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x" event={"ID":"9836015c-341f-44a4-a0b1-2d155148b264","Type":"ContainerStarted","Data":"febdf5e56d12592ccae563973e2be0d9b9fd0ff7ba6788f899660dbde3c33155"} Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.079402 5120 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f" event={"ID":"e3b00756-b775-4a1c-90b1-852a7f1712b7","Type":"ContainerStarted","Data":"ea31889c07661f32adbeaba68512715bc8f03db1e4ec070763ff42266e4261c8"} Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.083563 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"17ccb7ef-92f9-4fe2-aeac-92f706339496","Type":"ContainerStarted","Data":"1d80ecf5c4bdd4bc9b734189011d52d2dc1dd42b636073d41a7d36349d60d91a"} Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.089936 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v" event={"ID":"f2b79a21-0ce0-4563-9ea9-d7cd1e19652d","Type":"ContainerStarted","Data":"cf3cb73f1c400392061d7fdb348ff1ec801b22ba473c1785e9149b18c55b0c85"} Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.095826 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8" event={"ID":"d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2","Type":"ContainerStarted","Data":"54228982152cf02430d6dc29001cb0c614b6034eb69f8ed3d66b0fa9ae786746"} Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.098485 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789" event={"ID":"c5a872b8-950f-422a-9b1d-aaf761e5295c","Type":"ContainerStarted","Data":"eed3dfd2ff2681e3e2d219a0190a0b4a36ee15be4ad07e2da3a09e5042bacb0b"} Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.261082 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.893065712 podStartE2EDuration="14.26105986s" podCreationTimestamp="2026-01-22 12:15:34 +0000 UTC" firstStartedPulling="2026-01-22 12:15:35.816316832 +0000 UTC m=+1670.560265173" lastFinishedPulling="2026-01-22 12:15:47.18431098 +0000 UTC m=+1681.928259321" observedRunningTime="2026-01-22 12:15:48.236307539 +0000 UTC m=+1682.980255880" watchObservedRunningTime="2026-01-22 12:15:48.26105986 +0000 UTC m=+1683.005008201" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.594980 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-xm4v9"] Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.729013 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.728801 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-xm4v9"] Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.735971 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.736198 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.736362 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.736550 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.737109 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.737176 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.795964 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.796106 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xw8x\" (UniqueName: \"kubernetes.io/projected/1f7c177d-a587-4302-b084-7d4c780bf78b-kube-api-access-8xw8x\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.796282 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.796426 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-sensubility-config\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.796503 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-healthcheck-log\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: 
\"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.796569 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.796639 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-config\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.897988 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.898059 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-sensubility-config\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.898094 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-healthcheck-log\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.898119 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.898140 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-config\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899256 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899332 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: 
\"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899354 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899638 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-config\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899494 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-healthcheck-log\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899708 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8xw8x\" (UniqueName: \"kubernetes.io/projected/1f7c177d-a587-4302-b084-7d4c780bf78b-kube-api-access-8xw8x\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899473 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-sensubility-config\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.899779 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:48 crc kubenswrapper[5120]: I0122 12:15:48.922789 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xw8x\" (UniqueName: \"kubernetes.io/projected/1f7c177d-a587-4302-b084-7d4c780bf78b-kube-api-access-8xw8x\") pod \"stf-smoketest-smoke1-xm4v9\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.033821 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.040446 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.042986 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.056294 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.105640 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8skx\" (UniqueName: \"kubernetes.io/projected/25098451-fba7-406a-8973-0df221d16bda-kube-api-access-t8skx\") pod \"curl\" (UID: \"25098451-fba7-406a-8973-0df221d16bda\") " pod="service-telemetry/curl" Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.207614 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t8skx\" (UniqueName: \"kubernetes.io/projected/25098451-fba7-406a-8973-0df221d16bda-kube-api-access-t8skx\") pod \"curl\" (UID: \"25098451-fba7-406a-8973-0df221d16bda\") " pod="service-telemetry/curl" Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.256170 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8skx\" (UniqueName: \"kubernetes.io/projected/25098451-fba7-406a-8973-0df221d16bda-kube-api-access-t8skx\") pod \"curl\" (UID: \"25098451-fba7-406a-8973-0df221d16bda\") " pod="service-telemetry/curl" Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.372788 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.586170 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-xm4v9"] Jan 22 12:15:49 crc kubenswrapper[5120]: W0122 12:15:49.592150 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f7c177d_a587_4302_b084_7d4c780bf78b.slice/crio-b64f1211c610b1cf9479ced1160965d605792e89e87573ef200bc541bf17a2de WatchSource:0}: Error finding container b64f1211c610b1cf9479ced1160965d605792e89e87573ef200bc541bf17a2de: Status 404 returned error can't find the container with id b64f1211c610b1cf9479ced1160965d605792e89e87573ef200bc541bf17a2de Jan 22 12:15:49 crc kubenswrapper[5120]: I0122 12:15:49.644401 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 22 12:15:49 crc kubenswrapper[5120]: W0122 12:15:49.657590 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25098451_fba7_406a_8973_0df221d16bda.slice/crio-429c47a6e82370d8ab8953c5bb19669065342da0a2be7f47af0f06895d1c426c WatchSource:0}: Error finding container 429c47a6e82370d8ab8953c5bb19669065342da0a2be7f47af0f06895d1c426c: Status 404 returned error can't find the container with id 429c47a6e82370d8ab8953c5bb19669065342da0a2be7f47af0f06895d1c426c Jan 22 12:15:50 crc kubenswrapper[5120]: I0122 12:15:50.135471 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" event={"ID":"1f7c177d-a587-4302-b084-7d4c780bf78b","Type":"ContainerStarted","Data":"b64f1211c610b1cf9479ced1160965d605792e89e87573ef200bc541bf17a2de"} Jan 22 12:15:50 crc kubenswrapper[5120]: I0122 12:15:50.137297 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" 
event={"ID":"25098451-fba7-406a-8973-0df221d16bda","Type":"ContainerStarted","Data":"429c47a6e82370d8ab8953c5bb19669065342da0a2be7f47af0f06895d1c426c"} Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.136615 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484736-5pvc5"] Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.178915 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484736-5pvc5"] Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.179106 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484736-5pvc5" Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.437555 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.437911 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.438198 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.447103 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn8dl\" (UniqueName: \"kubernetes.io/projected/5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca-kube-api-access-mn8dl\") pod \"auto-csr-approver-29484736-5pvc5\" (UID: \"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca\") " pod="openshift-infra/auto-csr-approver-29484736-5pvc5" Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.549298 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mn8dl\" (UniqueName: \"kubernetes.io/projected/5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca-kube-api-access-mn8dl\") pod \"auto-csr-approver-29484736-5pvc5\" (UID: \"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca\") " pod="openshift-infra/auto-csr-approver-29484736-5pvc5" Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.572466 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn8dl\" (UniqueName: \"kubernetes.io/projected/5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca-kube-api-access-mn8dl\") pod \"auto-csr-approver-29484736-5pvc5\" (UID: \"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca\") " pod="openshift-infra/auto-csr-approver-29484736-5pvc5" Jan 22 12:16:00 crc kubenswrapper[5120]: I0122 12:16:00.763909 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484736-5pvc5" Jan 22 12:16:01 crc kubenswrapper[5120]: I0122 12:16:01.972637 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:16:01 crc kubenswrapper[5120]: I0122 12:16:01.973191 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:16:03 crc kubenswrapper[5120]: I0122 12:16:03.466493 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484736-5pvc5"] Jan 22 12:16:03 crc kubenswrapper[5120]: I0122 12:16:03.488556 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484736-5pvc5" event={"ID":"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca","Type":"ContainerStarted","Data":"564150a27c4da732632e24273e78390b7e240478afed6debebabd4375c23bfda"} Jan 22 12:16:04 crc kubenswrapper[5120]: I0122 12:16:04.503246 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" event={"ID":"1f7c177d-a587-4302-b084-7d4c780bf78b","Type":"ContainerStarted","Data":"ff38eff32aa3041858a79877ab066a7ce92fc6dc6d8cf6fccb024c7ec615617f"} Jan 22 12:16:04 crc kubenswrapper[5120]: I0122 12:16:04.512705 5120 generic.go:358] "Generic (PLEG): container finished" podID="25098451-fba7-406a-8973-0df221d16bda" containerID="ce66866e870ec5d7fb68c32efb8bbeee1c3238639c4a8df944eb20172469a38e" exitCode=0 Jan 22 12:16:04 crc kubenswrapper[5120]: I0122 12:16:04.512763 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"25098451-fba7-406a-8973-0df221d16bda","Type":"ContainerDied","Data":"ce66866e870ec5d7fb68c32efb8bbeee1c3238639c4a8df944eb20172469a38e"} Jan 22 12:16:05 crc kubenswrapper[5120]: I0122 12:16:05.524668 5120 generic.go:358] "Generic (PLEG): container finished" podID="5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca" containerID="7dd5e09283dddb7bf8d7833ea438fcac480d32b32def3f4fc53d049422374e23" exitCode=0 Jan 22 12:16:05 crc kubenswrapper[5120]: I0122 12:16:05.525185 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484736-5pvc5" event={"ID":"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca","Type":"ContainerDied","Data":"7dd5e09283dddb7bf8d7833ea438fcac480d32b32def3f4fc53d049422374e23"} Jan 22 12:16:05 crc kubenswrapper[5120]: I0122 12:16:05.806421 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 22 12:16:05 crc kubenswrapper[5120]: I0122 12:16:05.961320 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8skx\" (UniqueName: \"kubernetes.io/projected/25098451-fba7-406a-8973-0df221d16bda-kube-api-access-t8skx\") pod \"25098451-fba7-406a-8973-0df221d16bda\" (UID: \"25098451-fba7-406a-8973-0df221d16bda\") " Jan 22 12:16:05 crc kubenswrapper[5120]: I0122 12:16:05.969859 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25098451-fba7-406a-8973-0df221d16bda-kube-api-access-t8skx" (OuterVolumeSpecName: "kube-api-access-t8skx") pod "25098451-fba7-406a-8973-0df221d16bda" (UID: "25098451-fba7-406a-8973-0df221d16bda"). InnerVolumeSpecName "kube-api-access-t8skx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:16:06 crc kubenswrapper[5120]: I0122 12:16:06.003552 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_25098451-fba7-406a-8973-0df221d16bda/curl/0.log" Jan 22 12:16:06 crc kubenswrapper[5120]: I0122 12:16:06.065343 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t8skx\" (UniqueName: \"kubernetes.io/projected/25098451-fba7-406a-8973-0df221d16bda-kube-api-access-t8skx\") on node \"crc\" DevicePath \"\"" Jan 22 12:16:06 crc kubenswrapper[5120]: I0122 12:16:06.309384 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-4xz7b_cb40028b-f955-4b75-b559-a1c4ec5c9256/prometheus-webhook-snmp/0.log" Jan 22 12:16:06 crc kubenswrapper[5120]: I0122 12:16:06.535157 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 22 12:16:06 crc kubenswrapper[5120]: I0122 12:16:06.535233 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"25098451-fba7-406a-8973-0df221d16bda","Type":"ContainerDied","Data":"429c47a6e82370d8ab8953c5bb19669065342da0a2be7f47af0f06895d1c426c"} Jan 22 12:16:06 crc kubenswrapper[5120]: I0122 12:16:06.535304 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="429c47a6e82370d8ab8953c5bb19669065342da0a2be7f47af0f06895d1c426c" Jan 22 12:16:08 crc kubenswrapper[5120]: I0122 12:16:08.527057 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484736-5pvc5" Jan 22 12:16:08 crc kubenswrapper[5120]: I0122 12:16:08.572714 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484736-5pvc5" event={"ID":"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca","Type":"ContainerDied","Data":"564150a27c4da732632e24273e78390b7e240478afed6debebabd4375c23bfda"} Jan 22 12:16:08 crc kubenswrapper[5120]: I0122 12:16:08.572792 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="564150a27c4da732632e24273e78390b7e240478afed6debebabd4375c23bfda" Jan 22 12:16:08 crc kubenswrapper[5120]: I0122 12:16:08.572933 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484736-5pvc5" Jan 22 12:16:08 crc kubenswrapper[5120]: I0122 12:16:08.609564 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn8dl\" (UniqueName: \"kubernetes.io/projected/5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca-kube-api-access-mn8dl\") pod \"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca\" (UID: \"5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca\") " Jan 22 12:16:08 crc kubenswrapper[5120]: I0122 12:16:08.620094 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca-kube-api-access-mn8dl" (OuterVolumeSpecName: "kube-api-access-mn8dl") pod "5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca" (UID: "5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca"). InnerVolumeSpecName "kube-api-access-mn8dl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:16:08 crc kubenswrapper[5120]: I0122 12:16:08.711402 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mn8dl\" (UniqueName: \"kubernetes.io/projected/5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca-kube-api-access-mn8dl\") on node \"crc\" DevicePath \"\"" Jan 22 12:16:09 crc kubenswrapper[5120]: I0122 12:16:09.605946 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484730-z4qj9"] Jan 22 12:16:09 crc kubenswrapper[5120]: I0122 12:16:09.613339 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484730-z4qj9"] Jan 22 12:16:11 crc kubenswrapper[5120]: I0122 12:16:11.586678 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86fa02fb-d5af-46f8-b19a-9af5fd7e5353" path="/var/lib/kubelet/pods/86fa02fb-d5af-46f8-b19a-9af5fd7e5353/volumes" Jan 22 12:16:11 crc kubenswrapper[5120]: I0122 12:16:11.604579 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" event={"ID":"1f7c177d-a587-4302-b084-7d4c780bf78b","Type":"ContainerStarted","Data":"98ccc2b8fec36bdedebf6260b3d6f179de9f39ff33596f42b42951ef0d56edb8"} Jan 22 12:16:11 crc kubenswrapper[5120]: I0122 12:16:11.626355 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" podStartSLOduration=2.740024253 podStartE2EDuration="23.626333908s" podCreationTimestamp="2026-01-22 12:15:48 +0000 UTC" firstStartedPulling="2026-01-22 12:15:49.600284224 +0000 UTC m=+1684.344232565" lastFinishedPulling="2026-01-22 12:16:10.486593869 +0000 UTC m=+1705.230542220" observedRunningTime="2026-01-22 12:16:11.62597769 +0000 UTC m=+1706.369926051" watchObservedRunningTime="2026-01-22 12:16:11.626333908 +0000 UTC m=+1706.370282249" Jan 22 12:16:31 crc kubenswrapper[5120]: I0122 12:16:31.972760 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:16:31 crc kubenswrapper[5120]: I0122 12:16:31.973561 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:16:36 crc kubenswrapper[5120]: I0122 12:16:36.505455 
5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-4xz7b_cb40028b-f955-4b75-b559-a1c4ec5c9256/prometheus-webhook-snmp/0.log" Jan 22 12:16:40 crc kubenswrapper[5120]: I0122 12:16:40.858423 5120 generic.go:358] "Generic (PLEG): container finished" podID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerID="ff38eff32aa3041858a79877ab066a7ce92fc6dc6d8cf6fccb024c7ec615617f" exitCode=0 Jan 22 12:16:40 crc kubenswrapper[5120]: I0122 12:16:40.858817 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" event={"ID":"1f7c177d-a587-4302-b084-7d4c780bf78b","Type":"ContainerDied","Data":"ff38eff32aa3041858a79877ab066a7ce92fc6dc6d8cf6fccb024c7ec615617f"} Jan 22 12:16:40 crc kubenswrapper[5120]: I0122 12:16:40.859561 5120 scope.go:117] "RemoveContainer" containerID="ff38eff32aa3041858a79877ab066a7ce92fc6dc6d8cf6fccb024c7ec615617f" Jan 22 12:16:42 crc kubenswrapper[5120]: I0122 12:16:42.894040 5120 generic.go:358] "Generic (PLEG): container finished" podID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerID="98ccc2b8fec36bdedebf6260b3d6f179de9f39ff33596f42b42951ef0d56edb8" exitCode=0 Jan 22 12:16:42 crc kubenswrapper[5120]: I0122 12:16:42.894115 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" event={"ID":"1f7c177d-a587-4302-b084-7d4c780bf78b","Type":"ContainerDied","Data":"98ccc2b8fec36bdedebf6260b3d6f179de9f39ff33596f42b42951ef0d56edb8"} Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.175314 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.246235 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-config\") pod \"1f7c177d-a587-4302-b084-7d4c780bf78b\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.246396 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-entrypoint-script\") pod \"1f7c177d-a587-4302-b084-7d4c780bf78b\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.246471 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-entrypoint-script\") pod \"1f7c177d-a587-4302-b084-7d4c780bf78b\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.248219 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xw8x\" (UniqueName: \"kubernetes.io/projected/1f7c177d-a587-4302-b084-7d4c780bf78b-kube-api-access-8xw8x\") pod \"1f7c177d-a587-4302-b084-7d4c780bf78b\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.248337 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-healthcheck-log\") pod \"1f7c177d-a587-4302-b084-7d4c780bf78b\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " Jan 
22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.248643 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-publisher\") pod \"1f7c177d-a587-4302-b084-7d4c780bf78b\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.248932 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-sensubility-config\") pod \"1f7c177d-a587-4302-b084-7d4c780bf78b\" (UID: \"1f7c177d-a587-4302-b084-7d4c780bf78b\") " Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.254337 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7c177d-a587-4302-b084-7d4c780bf78b-kube-api-access-8xw8x" (OuterVolumeSpecName: "kube-api-access-8xw8x") pod "1f7c177d-a587-4302-b084-7d4c780bf78b" (UID: "1f7c177d-a587-4302-b084-7d4c780bf78b"). InnerVolumeSpecName "kube-api-access-8xw8x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.266830 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "1f7c177d-a587-4302-b084-7d4c780bf78b" (UID: "1f7c177d-a587-4302-b084-7d4c780bf78b"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.267568 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "1f7c177d-a587-4302-b084-7d4c780bf78b" (UID: "1f7c177d-a587-4302-b084-7d4c780bf78b"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.267913 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "1f7c177d-a587-4302-b084-7d4c780bf78b" (UID: "1f7c177d-a587-4302-b084-7d4c780bf78b"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.269421 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "1f7c177d-a587-4302-b084-7d4c780bf78b" (UID: "1f7c177d-a587-4302-b084-7d4c780bf78b"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.274354 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "1f7c177d-a587-4302-b084-7d4c780bf78b" (UID: "1f7c177d-a587-4302-b084-7d4c780bf78b"). InnerVolumeSpecName "sensubility-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.274651 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "1f7c177d-a587-4302-b084-7d4c780bf78b" (UID: "1f7c177d-a587-4302-b084-7d4c780bf78b"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.351333 5120 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.351372 5120 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.351385 5120 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.351396 5120 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.351405 5120 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.351414 5120 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/1f7c177d-a587-4302-b084-7d4c780bf78b-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.351423 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8xw8x\" (UniqueName: \"kubernetes.io/projected/1f7c177d-a587-4302-b084-7d4c780bf78b-kube-api-access-8xw8x\") on node \"crc\" DevicePath \"\"" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.917342 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.918265 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-xm4v9" event={"ID":"1f7c177d-a587-4302-b084-7d4c780bf78b","Type":"ContainerDied","Data":"b64f1211c610b1cf9479ced1160965d605792e89e87573ef200bc541bf17a2de"} Jan 22 12:16:44 crc kubenswrapper[5120]: I0122 12:16:44.918381 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b64f1211c610b1cf9479ced1160965d605792e89e87573ef200bc541bf17a2de" Jan 22 12:16:46 crc kubenswrapper[5120]: I0122 12:16:46.317210 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-xm4v9_1f7c177d-a587-4302-b084-7d4c780bf78b/smoketest-collectd/0.log" Jan 22 12:16:46 crc kubenswrapper[5120]: I0122 12:16:46.640447 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-xm4v9_1f7c177d-a587-4302-b084-7d4c780bf78b/smoketest-ceilometer/0.log" Jan 22 12:16:47 crc kubenswrapper[5120]: I0122 12:16:47.004175 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-48w6f_a388a8ad-2606-4be5-9640-e8b11efa3daa/default-interconnect/0.log" Jan 22 12:16:47 crc kubenswrapper[5120]: I0122 12:16:47.329996 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8_d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2/bridge/2.log" Jan 22 12:16:47 crc kubenswrapper[5120]: I0122 12:16:47.676379 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8_d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2/sg-core/0.log" Jan 22 12:16:47 crc kubenswrapper[5120]: I0122 12:16:47.987358 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v_f2b79a21-0ce0-4563-9ea9-d7cd1e19652d/bridge/2.log" Jan 22 12:16:48 crc kubenswrapper[5120]: I0122 12:16:48.305485 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v_f2b79a21-0ce0-4563-9ea9-d7cd1e19652d/sg-core/0.log" Jan 22 12:16:48 crc kubenswrapper[5120]: I0122 12:16:48.630302 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f_e3b00756-b775-4a1c-90b1-852a7f1712b7/bridge/2.log" Jan 22 12:16:48 crc kubenswrapper[5120]: I0122 12:16:48.973360 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f_e3b00756-b775-4a1c-90b1-852a7f1712b7/sg-core/0.log" Jan 22 12:16:49 crc kubenswrapper[5120]: I0122 12:16:49.318107 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789_c5a872b8-950f-422a-9b1d-aaf761e5295c/bridge/2.log" Jan 22 12:16:49 crc kubenswrapper[5120]: I0122 12:16:49.612010 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789_c5a872b8-950f-422a-9b1d-aaf761e5295c/sg-core/0.log" Jan 22 12:16:49 crc kubenswrapper[5120]: I0122 12:16:49.995239 5120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x_9836015c-341f-44a4-a0b1-2d155148b264/bridge/2.log" Jan 22 12:16:50 crc kubenswrapper[5120]: I0122 12:16:50.343934 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x_9836015c-341f-44a4-a0b1-2d155148b264/sg-core/0.log" Jan 22 12:16:53 crc kubenswrapper[5120]: I0122 12:16:53.990296 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-84c66d88-wp5jc_8f9d3100-17a5-4c92-bf93-17c74efea49f/operator/0.log" Jan 22 12:16:54 crc kubenswrapper[5120]: I0122 12:16:54.278084 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_af3a73d7-3578-4530-9916-0c3613d55591/prometheus/0.log" Jan 22 12:16:54 crc kubenswrapper[5120]: I0122 12:16:54.576695 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_d6cd7adc-81ad-4b43-bd4c-7f48f1df35be/elasticsearch/0.log" Jan 22 12:16:54 crc kubenswrapper[5120]: I0122 12:16:54.900366 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-4xz7b_cb40028b-f955-4b75-b559-a1c4ec5c9256/prometheus-webhook-snmp/0.log" Jan 22 12:16:55 crc kubenswrapper[5120]: I0122 12:16:55.199546 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_88fc8b5e-6a79-414c-8a72-7447f8db3056/alertmanager/0.log" Jan 22 12:17:01 crc kubenswrapper[5120]: I0122 12:17:01.395467 5120 scope.go:117] "RemoveContainer" containerID="e435702e7c696c62fc24675d08a9198377bd5a0c61f1adb503efe9265edbf5bd" Jan 22 12:17:01 crc kubenswrapper[5120]: I0122 12:17:01.972523 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:17:01 crc kubenswrapper[5120]: I0122 12:17:01.973090 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:17:01 crc kubenswrapper[5120]: I0122 12:17:01.973295 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:17:01 crc kubenswrapper[5120]: I0122 12:17:01.974334 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:17:01 crc kubenswrapper[5120]: I0122 12:17:01.974526 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" gracePeriod=600 Jan 22 12:17:02 crc 
kubenswrapper[5120]: E0122 12:17:02.728578 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:17:03 crc kubenswrapper[5120]: I0122 12:17:03.102331 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" exitCode=0 Jan 22 12:17:03 crc kubenswrapper[5120]: I0122 12:17:03.102382 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"} Jan 22 12:17:03 crc kubenswrapper[5120]: I0122 12:17:03.102480 5120 scope.go:117] "RemoveContainer" containerID="719354116d7ea0573a90aa1ae4bf7fd19ddeee3f2ea6145219b58e58618f132f" Jan 22 12:17:03 crc kubenswrapper[5120]: I0122 12:17:03.103570 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:17:03 crc kubenswrapper[5120]: E0122 12:17:03.104228 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:17:11 crc kubenswrapper[5120]: I0122 12:17:11.158132 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-69f575f8bc-9msdn_71c6d75c-6634-4017-92b9-487a57bcc47b/operator/0.log" Jan 22 12:17:15 crc kubenswrapper[5120]: I0122 12:17:15.578518 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:17:15 crc kubenswrapper[5120]: E0122 12:17:15.579321 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:17:15 crc kubenswrapper[5120]: I0122 12:17:15.582693 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-84c66d88-wp5jc_8f9d3100-17a5-4c92-bf93-17c74efea49f/operator/0.log" Jan 22 12:17:15 crc kubenswrapper[5120]: I0122 12:17:15.932721 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_17ccb7ef-92f9-4fe2-aeac-92f706339496/qdr/0.log" Jan 22 12:17:26 crc kubenswrapper[5120]: I0122 12:17:26.573504 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:17:26 crc kubenswrapper[5120]: E0122 12:17:26.574894 5120 pod_workers.go:1301] "Error syncing pod, 
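The "back-off 5m0s restarting failed container" errors that repeat from here on are CrashLoopBackOff at its ceiling: kubelet delays each restart with an exponential backoff that starts at 10s, doubles per crash, and caps at five minutes, which is why the same message recurs (12:17:03, 12:17:15, 12:17:26, 12:17:41, …) without the container coming back. The policy in miniature:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		base = 10 * time.Second
		max  = 5 * time.Minute
	)
	delay := base
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: wait %s before StartContainer\n", restart, delay)
		if delay *= 2; delay > max {
			delay = max // from the 6th crash on, every retry waits the full 5m0s
		}
	}
}
```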
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.576651 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:17:41 crc kubenswrapper[5120]: E0122 12:17:41.578995 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.668631 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2xb8g/must-gather-fcsxx"] Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.669758 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerName="smoketest-ceilometer" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.669839 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerName="smoketest-ceilometer" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.669910 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="25098451-fba7-406a-8973-0df221d16bda" containerName="curl" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.669982 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="25098451-fba7-406a-8973-0df221d16bda" containerName="curl" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670072 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerName="smoketest-collectd" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670129 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerName="smoketest-collectd" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670189 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca" containerName="oc" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670244 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca" containerName="oc" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670416 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca" containerName="oc" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670477 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerName="smoketest-ceilometer" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670541 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="1f7c177d-a587-4302-b084-7d4c780bf78b" containerName="smoketest-collectd" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.670597 5120 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="25098451-fba7-406a-8973-0df221d16bda" containerName="curl" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.676634 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2xb8g/must-gather-fcsxx" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.681066 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-2xb8g\"/\"default-dockercfg-ldbdd\"" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.681612 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-2xb8g\"/\"kube-root-ca.crt\"" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.681822 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-2xb8g\"/\"openshift-service-ca.crt\"" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.683193 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2xb8g/must-gather-fcsxx"] Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.746927 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkrlq\" (UniqueName: \"kubernetes.io/projected/01f5b3a1-c30b-4a70-9096-28a4e3d15a54-kube-api-access-zkrlq\") pod \"must-gather-fcsxx\" (UID: \"01f5b3a1-c30b-4a70-9096-28a4e3d15a54\") " pod="openshift-must-gather-2xb8g/must-gather-fcsxx" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.747026 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/01f5b3a1-c30b-4a70-9096-28a4e3d15a54-must-gather-output\") pod \"must-gather-fcsxx\" (UID: \"01f5b3a1-c30b-4a70-9096-28a4e3d15a54\") " pod="openshift-must-gather-2xb8g/must-gather-fcsxx" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.848104 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/01f5b3a1-c30b-4a70-9096-28a4e3d15a54-must-gather-output\") pod \"must-gather-fcsxx\" (UID: \"01f5b3a1-c30b-4a70-9096-28a4e3d15a54\") " pod="openshift-must-gather-2xb8g/must-gather-fcsxx" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.848316 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zkrlq\" (UniqueName: \"kubernetes.io/projected/01f5b3a1-c30b-4a70-9096-28a4e3d15a54-kube-api-access-zkrlq\") pod \"must-gather-fcsxx\" (UID: \"01f5b3a1-c30b-4a70-9096-28a4e3d15a54\") " pod="openshift-must-gather-2xb8g/must-gather-fcsxx" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.848579 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/01f5b3a1-c30b-4a70-9096-28a4e3d15a54-must-gather-output\") pod \"must-gather-fcsxx\" (UID: \"01f5b3a1-c30b-4a70-9096-28a4e3d15a54\") " pod="openshift-must-gather-2xb8g/must-gather-fcsxx" Jan 22 12:17:41 crc kubenswrapper[5120]: I0122 12:17:41.879166 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkrlq\" (UniqueName: \"kubernetes.io/projected/01f5b3a1-c30b-4a70-9096-28a4e3d15a54-kube-api-access-zkrlq\") pod \"must-gather-fcsxx\" (UID: \"01f5b3a1-c30b-4a70-9096-28a4e3d15a54\") " pod="openshift-must-gather-2xb8g/must-gather-fcsxx" Jan 22 12:17:42 crc kubenswrapper[5120]: I0122 12:17:42.016823 5120 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-must-gather-2xb8g/must-gather-fcsxx" Jan 22 12:17:42 crc kubenswrapper[5120]: I0122 12:17:42.461732 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2xb8g/must-gather-fcsxx"] Jan 22 12:17:43 crc kubenswrapper[5120]: I0122 12:17:43.483236 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2xb8g/must-gather-fcsxx" event={"ID":"01f5b3a1-c30b-4a70-9096-28a4e3d15a54","Type":"ContainerStarted","Data":"78631c107310d4abd1880128712672ef8b12b3bdf1600a786fa65b1af64baa60"} Jan 22 12:17:51 crc kubenswrapper[5120]: I0122 12:17:51.332162 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:17:51 crc kubenswrapper[5120]: I0122 12:17:51.332230 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:17:51 crc kubenswrapper[5120]: I0122 12:17:51.345561 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:17:51 crc kubenswrapper[5120]: I0122 12:17:51.345562 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:17:51 crc kubenswrapper[5120]: I0122 12:17:51.564064 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2xb8g/must-gather-fcsxx" event={"ID":"01f5b3a1-c30b-4a70-9096-28a4e3d15a54","Type":"ContainerStarted","Data":"d386d5080c7fc835e26078b82ade2811a34f47c64b7bd93476027a8ab5c2517c"} Jan 22 12:17:52 crc kubenswrapper[5120]: I0122 12:17:52.573433 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2xb8g/must-gather-fcsxx" event={"ID":"01f5b3a1-c30b-4a70-9096-28a4e3d15a54","Type":"ContainerStarted","Data":"364c7bf6a6f388a3e4047bdae372a183325ae5db514b5ac5af7808cecc0fedc2"} Jan 22 12:17:52 crc kubenswrapper[5120]: I0122 12:17:52.592643 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2xb8g/must-gather-fcsxx" podStartSLOduration=2.7365270539999997 podStartE2EDuration="11.592620019s" podCreationTimestamp="2026-01-22 12:17:41 +0000 UTC" firstStartedPulling="2026-01-22 12:17:42.478623914 +0000 UTC m=+1797.222572295" lastFinishedPulling="2026-01-22 12:17:51.334716919 +0000 UTC m=+1806.078665260" observedRunningTime="2026-01-22 12:17:52.586412478 +0000 UTC m=+1807.330360809" watchObservedRunningTime="2026-01-22 12:17:52.592620019 +0000 UTC m=+1807.336568360" Jan 22 12:17:53 crc kubenswrapper[5120]: I0122 12:17:53.572912 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:17:53 crc kubenswrapper[5120]: E0122 12:17:53.573366 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.138430 5120 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484738-tfzpk"] Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.156092 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484738-tfzpk"] Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.156226 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484738-tfzpk" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.172747 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.173057 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.175398 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.193624 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmjjr\" (UniqueName: \"kubernetes.io/projected/f97383a0-beb0-4ff9-a965-28e0e9b1addb-kube-api-access-rmjjr\") pod \"auto-csr-approver-29484738-tfzpk\" (UID: \"f97383a0-beb0-4ff9-a965-28e0e9b1addb\") " pod="openshift-infra/auto-csr-approver-29484738-tfzpk" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.295462 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rmjjr\" (UniqueName: \"kubernetes.io/projected/f97383a0-beb0-4ff9-a965-28e0e9b1addb-kube-api-access-rmjjr\") pod \"auto-csr-approver-29484738-tfzpk\" (UID: \"f97383a0-beb0-4ff9-a965-28e0e9b1addb\") " pod="openshift-infra/auto-csr-approver-29484738-tfzpk" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.320645 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmjjr\" (UniqueName: \"kubernetes.io/projected/f97383a0-beb0-4ff9-a965-28e0e9b1addb-kube-api-access-rmjjr\") pod \"auto-csr-approver-29484738-tfzpk\" (UID: \"f97383a0-beb0-4ff9-a965-28e0e9b1addb\") " pod="openshift-infra/auto-csr-approver-29484738-tfzpk" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.485162 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484738-tfzpk" Jan 22 12:18:00 crc kubenswrapper[5120]: I0122 12:18:00.926425 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484738-tfzpk"] Jan 22 12:18:01 crc kubenswrapper[5120]: I0122 12:18:01.646332 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484738-tfzpk" event={"ID":"f97383a0-beb0-4ff9-a965-28e0e9b1addb","Type":"ContainerStarted","Data":"3a7380241ccb5fb61fbba947c8b0dabf45e969af7a227e081182d8a4ca70e18b"} Jan 22 12:18:02 crc kubenswrapper[5120]: I0122 12:18:02.656812 5120 generic.go:358] "Generic (PLEG): container finished" podID="f97383a0-beb0-4ff9-a965-28e0e9b1addb" containerID="5130cc2c660ed67d488de9c861af0f840a6694cd424858313d97ed3425c416ca" exitCode=0 Jan 22 12:18:02 crc kubenswrapper[5120]: I0122 12:18:02.656951 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484738-tfzpk" event={"ID":"f97383a0-beb0-4ff9-a965-28e0e9b1addb","Type":"ContainerDied","Data":"5130cc2c660ed67d488de9c861af0f840a6694cd424858313d97ed3425c416ca"} Jan 22 12:18:03 crc kubenswrapper[5120]: I0122 12:18:03.931010 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484738-tfzpk" Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.105046 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmjjr\" (UniqueName: \"kubernetes.io/projected/f97383a0-beb0-4ff9-a965-28e0e9b1addb-kube-api-access-rmjjr\") pod \"f97383a0-beb0-4ff9-a965-28e0e9b1addb\" (UID: \"f97383a0-beb0-4ff9-a965-28e0e9b1addb\") " Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.115274 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97383a0-beb0-4ff9-a965-28e0e9b1addb-kube-api-access-rmjjr" (OuterVolumeSpecName: "kube-api-access-rmjjr") pod "f97383a0-beb0-4ff9-a965-28e0e9b1addb" (UID: "f97383a0-beb0-4ff9-a965-28e0e9b1addb"). InnerVolumeSpecName "kube-api-access-rmjjr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.207992 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rmjjr\" (UniqueName: \"kubernetes.io/projected/f97383a0-beb0-4ff9-a965-28e0e9b1addb-kube-api-access-rmjjr\") on node \"crc\" DevicePath \"\"" Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.682495 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484738-tfzpk" event={"ID":"f97383a0-beb0-4ff9-a965-28e0e9b1addb","Type":"ContainerDied","Data":"3a7380241ccb5fb61fbba947c8b0dabf45e969af7a227e081182d8a4ca70e18b"} Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.682914 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a7380241ccb5fb61fbba947c8b0dabf45e969af7a227e081182d8a4ca70e18b" Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.682544 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484738-tfzpk" Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.945974 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-fhxb8_3cc31b0e-b225-470f-870b-f89666eae47b/control-plane-machine-set-operator/0.log" Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.974602 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-x2rhp_dfeef834-363c-4dff-a170-acd203607c65/kube-rbac-proxy/0.log" Jan 22 12:18:04 crc kubenswrapper[5120]: I0122 12:18:04.988871 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-x2rhp_dfeef834-363c-4dff-a170-acd203607c65/machine-api-operator/0.log" Jan 22 12:18:05 crc kubenswrapper[5120]: I0122 12:18:05.009195 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484732-pmd7b"] Jan 22 12:18:05 crc kubenswrapper[5120]: I0122 12:18:05.016914 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484732-pmd7b"] Jan 22 12:18:05 crc kubenswrapper[5120]: I0122 12:18:05.581918 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2284d302-27de-4f84-9cd9-0b27dc76e987" path="/var/lib/kubelet/pods/2284d302-27de-4f84-9cd9-0b27dc76e987/volumes" Jan 22 12:18:07 crc kubenswrapper[5120]: I0122 12:18:07.572227 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:18:07 crc kubenswrapper[5120]: E0122 12:18:07.573084 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:18:10 crc kubenswrapper[5120]: I0122 12:18:10.432598 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-n6l95_56c64e8f-cd1a-468a-a526-ed7c1ff5ac88/cert-manager-controller/0.log" Jan 22 12:18:10 crc kubenswrapper[5120]: I0122 12:18:10.445236 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-qc2vc_abe35b4f-1ae8-4e82-8b22-5f2d8fe01445/cert-manager-cainjector/0.log" Jan 22 12:18:10 crc kubenswrapper[5120]: I0122 12:18:10.459830 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-r299r_fab5bde7-2cb3-4840-955e-6eec20d29b5d/cert-manager-webhook/0.log" Jan 22 12:18:16 crc kubenswrapper[5120]: I0122 12:18:16.445267 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-kjb4b_6f74f225-731c-48b9-a98d-36a191b5ff41/prometheus-operator/0.log" Jan 22 12:18:16 crc kubenswrapper[5120]: I0122 12:18:16.462542 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb_2e68b911-b2b1-4a04-a86f-91742f22bad9/prometheus-operator-admission-webhook/0.log" Jan 22 12:18:16 crc kubenswrapper[5120]: I0122 12:18:16.476145 5120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7_6924228f-579c-408a-8a40-b103b066446d/prometheus-operator-admission-webhook/0.log" Jan 22 12:18:16 crc kubenswrapper[5120]: I0122 12:18:16.496502 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-s6759_da59fdd4-fe7a-4efd-b136-79a9b05d38b8/operator/0.log" Jan 22 12:18:16 crc kubenswrapper[5120]: I0122 12:18:16.508125 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-n9lhg_da376ee2-11ae-493e-9e4d-d8ac6fadfb53/perses-operator/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.387530 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz_5915ccea-14c1-48c1-8e09-9cc508bb150e/extract/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.397696 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz_5915ccea-14c1-48c1-8e09-9cc508bb150e/util/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.433947 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a87lvz_5915ccea-14c1-48c1-8e09-9cc508bb150e/pull/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.448192 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn_6451a1e2-e63d-4a21-bab9-c97f9b2c9236/extract/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.457010 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn_6451a1e2-e63d-4a21-bab9-c97f9b2c9236/util/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.474032 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbnzxn_6451a1e2-e63d-4a21-bab9-c97f9b2c9236/pull/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.490068 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6_6ae07b37-44a2-4e47-abb9-5587cb866c3b/extract/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.511276 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6_6ae07b37-44a2-4e47-abb9-5587cb866c3b/util/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.518833 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e86dw6_6ae07b37-44a2-4e47-abb9-5587cb866c3b/pull/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.533611 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b_04591ad2-b41c-420f-9328-a9ff515b4e1e/extract/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.540515 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b_04591ad2-b41c-420f-9328-a9ff515b4e1e/util/0.log" 
Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.551493 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08qbn7b_04591ad2-b41c-420f-9328-a9ff515b4e1e/pull/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.571481 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:18:22 crc kubenswrapper[5120]: E0122 12:18:22.571716 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.777602 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7xvj9_90af06b6-8b8b-48f3-bfb2-541ef60610fa/registry-server/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.784794 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7xvj9_90af06b6-8b8b-48f3-bfb2-541ef60610fa/extract-utilities/0.log" Jan 22 12:18:22 crc kubenswrapper[5120]: I0122 12:18:22.791775 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7xvj9_90af06b6-8b8b-48f3-bfb2-541ef60610fa/extract-content/0.log" Jan 22 12:18:23 crc kubenswrapper[5120]: I0122 12:18:23.067894 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jck2s_3a14b1ee-af9d-4a1e-863f-c69c216c25d2/registry-server/0.log" Jan 22 12:18:23 crc kubenswrapper[5120]: I0122 12:18:23.073055 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jck2s_3a14b1ee-af9d-4a1e-863f-c69c216c25d2/extract-utilities/0.log" Jan 22 12:18:23 crc kubenswrapper[5120]: I0122 12:18:23.079997 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jck2s_3a14b1ee-af9d-4a1e-863f-c69c216c25d2/extract-content/0.log" Jan 22 12:18:23 crc kubenswrapper[5120]: I0122 12:18:23.094366 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-nzw8g_abdba773-b95f-4d73-bcb5-d36526f8e13d/marketplace-operator/0.log" Jan 22 12:18:23 crc kubenswrapper[5120]: I0122 12:18:23.373525 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-srj7k_65ded1b5-0551-47c3-b32f-646318c3055a/registry-server/0.log" Jan 22 12:18:23 crc kubenswrapper[5120]: I0122 12:18:23.380651 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-srj7k_65ded1b5-0551-47c3-b32f-646318c3055a/extract-utilities/0.log" Jan 22 12:18:23 crc kubenswrapper[5120]: I0122 12:18:23.391773 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-srj7k_65ded1b5-0551-47c3-b32f-646318c3055a/extract-content/0.log" Jan 22 12:18:28 crc kubenswrapper[5120]: I0122 12:18:28.191207 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-kjb4b_6f74f225-731c-48b9-a98d-36a191b5ff41/prometheus-operator/0.log" Jan 
22 12:18:28 crc kubenswrapper[5120]: I0122 12:18:28.208836 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb_2e68b911-b2b1-4a04-a86f-91742f22bad9/prometheus-operator-admission-webhook/0.log" Jan 22 12:18:28 crc kubenswrapper[5120]: I0122 12:18:28.228210 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7_6924228f-579c-408a-8a40-b103b066446d/prometheus-operator-admission-webhook/0.log" Jan 22 12:18:28 crc kubenswrapper[5120]: I0122 12:18:28.249601 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-s6759_da59fdd4-fe7a-4efd-b136-79a9b05d38b8/operator/0.log" Jan 22 12:18:28 crc kubenswrapper[5120]: I0122 12:18:28.263357 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-n9lhg_da376ee2-11ae-493e-9e4d-d8ac6fadfb53/perses-operator/0.log" Jan 22 12:18:37 crc kubenswrapper[5120]: I0122 12:18:37.572241 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:18:37 crc kubenswrapper[5120]: E0122 12:18:37.572670 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:18:39 crc kubenswrapper[5120]: I0122 12:18:39.061913 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-kjb4b_6f74f225-731c-48b9-a98d-36a191b5ff41/prometheus-operator/0.log" Jan 22 12:18:39 crc kubenswrapper[5120]: I0122 12:18:39.076104 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f6bdccb4-78zjb_2e68b911-b2b1-4a04-a86f-91742f22bad9/prometheus-operator-admission-webhook/0.log" Jan 22 12:18:39 crc kubenswrapper[5120]: I0122 12:18:39.091395 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f6bdccb4-kw6h7_6924228f-579c-408a-8a40-b103b066446d/prometheus-operator-admission-webhook/0.log" Jan 22 12:18:39 crc kubenswrapper[5120]: I0122 12:18:39.117787 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-s6759_da59fdd4-fe7a-4efd-b136-79a9b05d38b8/operator/0.log" Jan 22 12:18:39 crc kubenswrapper[5120]: I0122 12:18:39.133034 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-n9lhg_da376ee2-11ae-493e-9e4d-d8ac6fadfb53/perses-operator/0.log" Jan 22 12:18:39 crc kubenswrapper[5120]: I0122 12:18:39.555520 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-n6l95_56c64e8f-cd1a-468a-a526-ed7c1ff5ac88/cert-manager-controller/0.log" Jan 22 12:18:39 crc kubenswrapper[5120]: I0122 12:18:39.568741 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-qc2vc_abe35b4f-1ae8-4e82-8b22-5f2d8fe01445/cert-manager-cainjector/0.log" Jan 22 12:18:39 crc kubenswrapper[5120]: 
I0122 12:18:39.580524 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-r299r_fab5bde7-2cb3-4840-955e-6eec20d29b5d/cert-manager-webhook/0.log" Jan 22 12:18:40 crc kubenswrapper[5120]: I0122 12:18:40.082670 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-n6l95_56c64e8f-cd1a-468a-a526-ed7c1ff5ac88/cert-manager-controller/0.log" Jan 22 12:18:40 crc kubenswrapper[5120]: I0122 12:18:40.094695 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-qc2vc_abe35b4f-1ae8-4e82-8b22-5f2d8fe01445/cert-manager-cainjector/0.log" Jan 22 12:18:40 crc kubenswrapper[5120]: I0122 12:18:40.115176 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-r299r_fab5bde7-2cb3-4840-955e-6eec20d29b5d/cert-manager-webhook/0.log" Jan 22 12:18:40 crc kubenswrapper[5120]: I0122 12:18:40.597186 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-fhxb8_3cc31b0e-b225-470f-870b-f89666eae47b/control-plane-machine-set-operator/0.log" Jan 22 12:18:40 crc kubenswrapper[5120]: I0122 12:18:40.610528 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-x2rhp_dfeef834-363c-4dff-a170-acd203607c65/kube-rbac-proxy/0.log" Jan 22 12:18:40 crc kubenswrapper[5120]: I0122 12:18:40.621172 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-x2rhp_dfeef834-363c-4dff-a170-acd203607c65/machine-api-operator/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.201634 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_88fc8b5e-6a79-414c-8a72-7447f8db3056/alertmanager/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.212760 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_88fc8b5e-6a79-414c-8a72-7447f8db3056/config-reloader/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.222024 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_88fc8b5e-6a79-414c-8a72-7447f8db3056/oauth-proxy/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.241460 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_88fc8b5e-6a79-414c-8a72-7447f8db3056/init-config-reloader/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.253314 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_25098451-fba7-406a-8973-0df221d16bda/curl/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.265721 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789_c5a872b8-950f-422a-9b1d-aaf761e5295c/bridge/2.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.266216 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789_c5a872b8-950f-422a-9b1d-aaf761e5295c/bridge/1.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.272846 5120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7cd8d6fc85-dc789_c5a872b8-950f-422a-9b1d-aaf761e5295c/sg-core/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.286153 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f_e3b00756-b775-4a1c-90b1-852a7f1712b7/oauth-proxy/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.292317 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f_e3b00756-b775-4a1c-90b1-852a7f1712b7/bridge/2.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.292575 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f_e3b00756-b775-4a1c-90b1-852a7f1712b7/bridge/1.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.298095 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-p862f_e3b00756-b775-4a1c-90b1-852a7f1712b7/sg-core/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.310481 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v_f2b79a21-0ce0-4563-9ea9-d7cd1e19652d/bridge/2.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.311384 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v_f2b79a21-0ce0-4563-9ea9-d7cd1e19652d/bridge/1.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.316645 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-86764d7bdc-pzf4v_f2b79a21-0ce0-4563-9ea9-d7cd1e19652d/sg-core/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.329621 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8_d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2/oauth-proxy/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.340920 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8_d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2/bridge/1.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.342773 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8_d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2/bridge/2.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.347752 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-j5zq8_d3caee9e-30bb-45fe-8ff9-2ef2a5f6d9a2/sg-core/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.360703 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x_9836015c-341f-44a4-a0b1-2d155148b264/oauth-proxy/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.378402 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x_9836015c-341f-44a4-a0b1-2d155148b264/bridge/2.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.378670 5120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x_9836015c-341f-44a4-a0b1-2d155148b264/bridge/1.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.383930 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7np5x_9836015c-341f-44a4-a0b1-2d155148b264/sg-core/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.402595 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-48w6f_a388a8ad-2606-4be5-9640-e8b11efa3daa/default-interconnect/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.413888 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-4xz7b_cb40028b-f955-4b75-b559-a1c4ec5c9256/prometheus-webhook-snmp/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.467104 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elastic-operator-796f77fbdf-t9sbr_164c4d54-e519-4e1e-9e4b-3e2881312d55/manager/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.492357 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_d6cd7adc-81ad-4b43-bd4c-7f48f1df35be/elasticsearch/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.507977 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_d6cd7adc-81ad-4b43-bd4c-7f48f1df35be/elastic-internal-init-filesystem/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.514546 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_d6cd7adc-81ad-4b43-bd4c-7f48f1df35be/elastic-internal-suspend/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.532514 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_interconnect-operator-78b9bd8798-sd4wv_b6e8a299-2880-4236-8f8b-b6983db7ed96/interconnect-operator/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.548848 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_af3a73d7-3578-4530-9916-0c3613d55591/prometheus/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.558007 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_af3a73d7-3578-4530-9916-0c3613d55591/config-reloader/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.566032 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_af3a73d7-3578-4530-9916-0c3613d55591/oauth-proxy/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.574917 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_af3a73d7-3578-4530-9916-0c3613d55591/init-config-reloader/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.639885 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_aec972f4-74cd-403c-a0a5-2e56146e5aa2/docker-build/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.647397 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_aec972f4-74cd-403c-a0a5-2e56146e5aa2/git-clone/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.655554 5120 log.go:25] "Finished 
parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_aec972f4-74cd-403c-a0a5-2e56146e5aa2/manage-dockerfile/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.671631 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_17ccb7ef-92f9-4fe2-aeac-92f706339496/qdr/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.729653 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_22ca9e65-c1f9-472a-8795-d6806d6bf7e0/docker-build/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.736194 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_22ca9e65-c1f9-472a-8795-d6806d6bf7e0/git-clone/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.748329 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_22ca9e65-c1f9-472a-8795-d6806d6bf7e0/manage-dockerfile/0.log" Jan 22 12:18:41 crc kubenswrapper[5120]: I0122 12:18:41.995025 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-69f575f8bc-9msdn_71c6d75c-6634-4017-92b9-487a57bcc47b/operator/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.050360 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_76125ec9-7200-4d9a-8632-4f6a653c434c/docker-build/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.056704 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_76125ec9-7200-4d9a-8632-4f6a653c434c/git-clone/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.067764 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_76125ec9-7200-4d9a-8632-4f6a653c434c/manage-dockerfile/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.124222 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_4f1f5ecd-00ad-4747-b1eb-d701595508ad/docker-build/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.132103 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_4f1f5ecd-00ad-4747-b1eb-d701595508ad/git-clone/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.142533 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_4f1f5ecd-00ad-4747-b1eb-d701595508ad/manage-dockerfile/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.219049 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_379c9b40-0f89-404c-ba85-6b98c4a35a4f/docker-build/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.228367 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_379c9b40-0f89-404c-ba85-6b98c4a35a4f/git-clone/0.log" Jan 22 12:18:42 crc kubenswrapper[5120]: I0122 12:18:42.235313 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_379c9b40-0f89-404c-ba85-6b98c4a35a4f/manage-dockerfile/0.log" Jan 22 12:18:46 crc kubenswrapper[5120]: I0122 12:18:46.134790 5120 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-84c66d88-wp5jc_8f9d3100-17a5-4c92-bf93-17c74efea49f/operator/0.log" Jan 22 12:18:46 crc kubenswrapper[5120]: I0122 12:18:46.165912 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-xm4v9_1f7c177d-a587-4302-b084-7d4c780bf78b/smoketest-collectd/0.log" Jan 22 12:18:46 crc kubenswrapper[5120]: I0122 12:18:46.174759 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-xm4v9_1f7c177d-a587-4302-b084-7d4c780bf78b/smoketest-ceilometer/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.609356 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.610849 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/1.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.625228 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rg989_97df0621-ddba-4462-8134-59bc671c7351/kube-multus-additional-cni-plugins/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.637631 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rg989_97df0621-ddba-4462-8134-59bc671c7351/egress-router-binary-copy/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.643015 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rg989_97df0621-ddba-4462-8134-59bc671c7351/cni-plugins/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.652521 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rg989_97df0621-ddba-4462-8134-59bc671c7351/bond-cni-plugin/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.659817 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rg989_97df0621-ddba-4462-8134-59bc671c7351/routeoverride-cni/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.669074 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rg989_97df0621-ddba-4462-8134-59bc671c7351/whereabouts-cni-bincopy/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.676911 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rg989_97df0621-ddba-4462-8134-59bc671c7351/whereabouts-cni/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.693773 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-69db94689b-dp8rm_da2b1465-54c1-4a7d-8cb6-755b28e448b8/multus-admission-controller/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.703862 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-69db94689b-dp8rm_da2b1465-54c1-4a7d-8cb6-755b28e448b8/kube-rbac-proxy/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.735848 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-ldwx4_dababdca-8afb-452f-865f-54de3aec21d9/network-metrics-daemon/0.log" Jan 22 12:18:47 crc kubenswrapper[5120]: I0122 12:18:47.742539 
5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-ldwx4_dababdca-8afb-452f-865f-54de3aec21d9/kube-rbac-proxy/0.log" Jan 22 12:18:49 crc kubenswrapper[5120]: I0122 12:18:49.571835 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:18:49 crc kubenswrapper[5120]: E0122 12:18:49.572242 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:19:01 crc kubenswrapper[5120]: I0122 12:19:01.576606 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:19:01 crc kubenswrapper[5120]: E0122 12:19:01.578205 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:19:01 crc kubenswrapper[5120]: I0122 12:19:01.590281 5120 scope.go:117] "RemoveContainer" containerID="afab18be716ae606d212e93ff4cb99381fd77d17295864dd09555b0262bbf573" Jan 22 12:19:13 crc kubenswrapper[5120]: I0122 12:19:13.572972 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:19:13 crc kubenswrapper[5120]: E0122 12:19:13.573938 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:19:25 crc kubenswrapper[5120]: I0122 12:19:25.575690 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:19:25 crc kubenswrapper[5120]: E0122 12:19:25.576631 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:19:40 crc kubenswrapper[5120]: I0122 12:19:40.572423 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:19:40 crc kubenswrapper[5120]: E0122 12:19:40.574098 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:19:53 crc kubenswrapper[5120]: I0122 12:19:53.572537 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:19:53 crc kubenswrapper[5120]: E0122 12:19:53.573933 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.137357 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484740-pq7hx"] Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.139163 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f97383a0-beb0-4ff9-a965-28e0e9b1addb" containerName="oc" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.139181 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97383a0-beb0-4ff9-a965-28e0e9b1addb" containerName="oc" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.139352 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f97383a0-beb0-4ff9-a965-28e0e9b1addb" containerName="oc" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.170194 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484740-pq7hx"] Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.170356 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484740-pq7hx" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.172783 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.173739 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.173868 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.288584 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk6x5\" (UniqueName: \"kubernetes.io/projected/6609faf3-2234-4edf-96b2-132b3e0c23c4-kube-api-access-xk6x5\") pod \"auto-csr-approver-29484740-pq7hx\" (UID: \"6609faf3-2234-4edf-96b2-132b3e0c23c4\") " pod="openshift-infra/auto-csr-approver-29484740-pq7hx" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.390710 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xk6x5\" (UniqueName: \"kubernetes.io/projected/6609faf3-2234-4edf-96b2-132b3e0c23c4-kube-api-access-xk6x5\") pod \"auto-csr-approver-29484740-pq7hx\" (UID: \"6609faf3-2234-4edf-96b2-132b3e0c23c4\") " pod="openshift-infra/auto-csr-approver-29484740-pq7hx" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.435028 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk6x5\" (UniqueName: \"kubernetes.io/projected/6609faf3-2234-4edf-96b2-132b3e0c23c4-kube-api-access-xk6x5\") pod \"auto-csr-approver-29484740-pq7hx\" (UID: \"6609faf3-2234-4edf-96b2-132b3e0c23c4\") " pod="openshift-infra/auto-csr-approver-29484740-pq7hx" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.492535 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484740-pq7hx" Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.777012 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484740-pq7hx"] Jan 22 12:20:00 crc kubenswrapper[5120]: I0122 12:20:00.881319 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484740-pq7hx" event={"ID":"6609faf3-2234-4edf-96b2-132b3e0c23c4","Type":"ContainerStarted","Data":"1dbc83eb8b60b8c281711b1ff11c4c1df7346ea7be55f0b03d29a0db49d9cf67"} Jan 22 12:20:02 crc kubenswrapper[5120]: I0122 12:20:02.904663 5120 generic.go:358] "Generic (PLEG): container finished" podID="6609faf3-2234-4edf-96b2-132b3e0c23c4" containerID="be0e7176f01a842ccbd6627161b56398b3ffe33051efd8876db22a192b4801d2" exitCode=0 Jan 22 12:20:02 crc kubenswrapper[5120]: I0122 12:20:02.904730 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484740-pq7hx" event={"ID":"6609faf3-2234-4edf-96b2-132b3e0c23c4","Type":"ContainerDied","Data":"be0e7176f01a842ccbd6627161b56398b3ffe33051efd8876db22a192b4801d2"} Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.191620 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484740-pq7hx" Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.257412 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk6x5\" (UniqueName: \"kubernetes.io/projected/6609faf3-2234-4edf-96b2-132b3e0c23c4-kube-api-access-xk6x5\") pod \"6609faf3-2234-4edf-96b2-132b3e0c23c4\" (UID: \"6609faf3-2234-4edf-96b2-132b3e0c23c4\") " Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.266635 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6609faf3-2234-4edf-96b2-132b3e0c23c4-kube-api-access-xk6x5" (OuterVolumeSpecName: "kube-api-access-xk6x5") pod "6609faf3-2234-4edf-96b2-132b3e0c23c4" (UID: "6609faf3-2234-4edf-96b2-132b3e0c23c4"). InnerVolumeSpecName "kube-api-access-xk6x5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.359413 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xk6x5\" (UniqueName: \"kubernetes.io/projected/6609faf3-2234-4edf-96b2-132b3e0c23c4-kube-api-access-xk6x5\") on node \"crc\" DevicePath \"\"" Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.571923 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:20:04 crc kubenswrapper[5120]: E0122 12:20:04.572685 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.931571 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484740-pq7hx" event={"ID":"6609faf3-2234-4edf-96b2-132b3e0c23c4","Type":"ContainerDied","Data":"1dbc83eb8b60b8c281711b1ff11c4c1df7346ea7be55f0b03d29a0db49d9cf67"} Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.932098 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dbc83eb8b60b8c281711b1ff11c4c1df7346ea7be55f0b03d29a0db49d9cf67" Jan 22 12:20:04 crc kubenswrapper[5120]: I0122 12:20:04.931605 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484740-pq7hx" Jan 22 12:20:05 crc kubenswrapper[5120]: I0122 12:20:05.270227 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484734-7jmnm"] Jan 22 12:20:05 crc kubenswrapper[5120]: I0122 12:20:05.279668 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484734-7jmnm"] Jan 22 12:20:05 crc kubenswrapper[5120]: I0122 12:20:05.590077 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c1b3bc9-3782-474e-a90c-86f0ba86fa6a" path="/var/lib/kubelet/pods/2c1b3bc9-3782-474e-a90c-86f0ba86fa6a/volumes" Jan 22 12:20:16 crc kubenswrapper[5120]: I0122 12:20:16.572589 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:20:16 crc kubenswrapper[5120]: E0122 12:20:16.574214 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:20:28 crc kubenswrapper[5120]: I0122 12:20:28.574707 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:20:28 crc kubenswrapper[5120]: E0122 12:20:28.576808 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:20:41 crc kubenswrapper[5120]: I0122 12:20:41.571943 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:20:41 crc kubenswrapper[5120]: E0122 12:20:41.573030 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.038818 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cd8qm"] Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.040874 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6609faf3-2234-4edf-96b2-132b3e0c23c4" containerName="oc" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.040889 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="6609faf3-2234-4edf-96b2-132b3e0c23c4" containerName="oc" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.041087 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="6609faf3-2234-4edf-96b2-132b3e0c23c4" containerName="oc" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.047347 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.071305 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cd8qm"] Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.169564 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-utilities\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.169697 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcw96\" (UniqueName: \"kubernetes.io/projected/95d21ef3-45db-4786-bb22-1a4660b26e98-kube-api-access-xcw96\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.169816 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-catalog-content\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.271657 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-utilities\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.271744 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xcw96\" (UniqueName: \"kubernetes.io/projected/95d21ef3-45db-4786-bb22-1a4660b26e98-kube-api-access-xcw96\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.271822 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-catalog-content\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.272428 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-utilities\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.272477 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-catalog-content\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.300477 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xcw96\" (UniqueName: \"kubernetes.io/projected/95d21ef3-45db-4786-bb22-1a4660b26e98-kube-api-access-xcw96\") pod \"redhat-operators-cd8qm\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") " pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.392976 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.654237 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cd8qm"] Jan 22 12:20:54 crc kubenswrapper[5120]: I0122 12:20:54.665700 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:20:55 crc kubenswrapper[5120]: I0122 12:20:55.532145 5120 generic.go:358] "Generic (PLEG): container finished" podID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerID="a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de" exitCode=0 Jan 22 12:20:55 crc kubenswrapper[5120]: I0122 12:20:55.532352 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd8qm" event={"ID":"95d21ef3-45db-4786-bb22-1a4660b26e98","Type":"ContainerDied","Data":"a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de"} Jan 22 12:20:55 crc kubenswrapper[5120]: I0122 12:20:55.532661 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd8qm" event={"ID":"95d21ef3-45db-4786-bb22-1a4660b26e98","Type":"ContainerStarted","Data":"bda931994cbf69f925888b33d7ff244764b67e604b557430bec63fa583126f66"} Jan 22 12:20:55 crc kubenswrapper[5120]: I0122 12:20:55.579583 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:20:55 crc kubenswrapper[5120]: E0122 12:20:55.579993 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:20:57 crc kubenswrapper[5120]: I0122 12:20:57.556465 5120 generic.go:358] "Generic (PLEG): container finished" podID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerID="a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f" exitCode=0 Jan 22 12:20:57 crc kubenswrapper[5120]: I0122 12:20:57.556563 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd8qm" event={"ID":"95d21ef3-45db-4786-bb22-1a4660b26e98","Type":"ContainerDied","Data":"a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f"} Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.572105 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd8qm" event={"ID":"95d21ef3-45db-4786-bb22-1a4660b26e98","Type":"ContainerStarted","Data":"4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4"} Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.618276 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cd8qm" podStartSLOduration=3.460111542 podStartE2EDuration="4.618251715s" podCreationTimestamp="2026-01-22 12:20:54 +0000 UTC" 
firstStartedPulling="2026-01-22 12:20:55.533463527 +0000 UTC m=+1990.277411868" lastFinishedPulling="2026-01-22 12:20:56.6916037 +0000 UTC m=+1991.435552041" observedRunningTime="2026-01-22 12:20:58.599337709 +0000 UTC m=+1993.343286050" watchObservedRunningTime="2026-01-22 12:20:58.618251715 +0000 UTC m=+1993.362200056" Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.814350 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xd9rs"] Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.821728 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.837833 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xd9rs"] Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.960947 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r46x\" (UniqueName: \"kubernetes.io/projected/50b831c9-8487-4923-8280-3f8732cc4e62-kube-api-access-9r46x\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.961479 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-catalog-content\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:58 crc kubenswrapper[5120]: I0122 12:20:58.961523 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-utilities\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.063854 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-catalog-content\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.064006 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-utilities\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.064084 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9r46x\" (UniqueName: \"kubernetes.io/projected/50b831c9-8487-4923-8280-3f8732cc4e62-kube-api-access-9r46x\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.064639 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-catalog-content\") pod 
\"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.064746 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-utilities\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.099175 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r46x\" (UniqueName: \"kubernetes.io/projected/50b831c9-8487-4923-8280-3f8732cc4e62-kube-api-access-9r46x\") pod \"certified-operators-xd9rs\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") " pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.151616 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xd9rs" Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.495110 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xd9rs"] Jan 22 12:20:59 crc kubenswrapper[5120]: I0122 12:20:59.590375 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rs" event={"ID":"50b831c9-8487-4923-8280-3f8732cc4e62","Type":"ContainerStarted","Data":"0416a99eafcad5b84de8e060b9ad9afbc806dd6a6d5f802cf815e9fb58c4057a"} Jan 22 12:21:00 crc kubenswrapper[5120]: I0122 12:21:00.603776 5120 generic.go:358] "Generic (PLEG): container finished" podID="50b831c9-8487-4923-8280-3f8732cc4e62" containerID="9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c" exitCode=0 Jan 22 12:21:00 crc kubenswrapper[5120]: I0122 12:21:00.603895 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rs" event={"ID":"50b831c9-8487-4923-8280-3f8732cc4e62","Type":"ContainerDied","Data":"9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c"} Jan 22 12:21:01 crc kubenswrapper[5120]: I0122 12:21:01.616615 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rs" event={"ID":"50b831c9-8487-4923-8280-3f8732cc4e62","Type":"ContainerStarted","Data":"a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d"} Jan 22 12:21:01 crc kubenswrapper[5120]: I0122 12:21:01.769179 5120 scope.go:117] "RemoveContainer" containerID="21b98295bffce8d00861339ce4655dd1e74538d2d7b8c008a2e3013d23d808e0" Jan 22 12:21:04 crc kubenswrapper[5120]: I0122 12:21:04.394118 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:21:04 crc kubenswrapper[5120]: I0122 12:21:04.394591 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:21:04 crc kubenswrapper[5120]: I0122 12:21:04.460695 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cd8qm" Jan 22 12:21:04 crc kubenswrapper[5120]: I0122 12:21:04.647724 5120 generic.go:358] "Generic (PLEG): container finished" podID="50b831c9-8487-4923-8280-3f8732cc4e62" containerID="a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d" exitCode=0 Jan 22 12:21:04 crc kubenswrapper[5120]: I0122 
12:21:04.648207 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rs" event={"ID":"50b831c9-8487-4923-8280-3f8732cc4e62","Type":"ContainerDied","Data":"a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d"}
Jan 22 12:21:04 crc kubenswrapper[5120]: I0122 12:21:04.698113 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cd8qm"
Jan 22 12:21:06 crc kubenswrapper[5120]: I0122 12:21:06.611522 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cd8qm"]
Jan 22 12:21:06 crc kubenswrapper[5120]: I0122 12:21:06.674882 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cd8qm" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="registry-server" containerID="cri-o://4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4" gracePeriod=2
Jan 22 12:21:06 crc kubenswrapper[5120]: I0122 12:21:06.675406 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rs" event={"ID":"50b831c9-8487-4923-8280-3f8732cc4e62","Type":"ContainerStarted","Data":"f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806"}
Jan 22 12:21:06 crc kubenswrapper[5120]: I0122 12:21:06.701784 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xd9rs" podStartSLOduration=8.080341133 podStartE2EDuration="8.701760171s" podCreationTimestamp="2026-01-22 12:20:58 +0000 UTC" firstStartedPulling="2026-01-22 12:21:00.605068499 +0000 UTC m=+1995.349016850" lastFinishedPulling="2026-01-22 12:21:01.226487547 +0000 UTC m=+1995.970435888" observedRunningTime="2026-01-22 12:21:06.701553307 +0000 UTC m=+2001.445501648" watchObservedRunningTime="2026-01-22 12:21:06.701760171 +0000 UTC m=+2001.445708512"
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.123595 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cd8qm"
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.225696 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-catalog-content\") pod \"95d21ef3-45db-4786-bb22-1a4660b26e98\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") "
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.226268 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-utilities\") pod \"95d21ef3-45db-4786-bb22-1a4660b26e98\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") "
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.226506 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcw96\" (UniqueName: \"kubernetes.io/projected/95d21ef3-45db-4786-bb22-1a4660b26e98-kube-api-access-xcw96\") pod \"95d21ef3-45db-4786-bb22-1a4660b26e98\" (UID: \"95d21ef3-45db-4786-bb22-1a4660b26e98\") "
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.227378 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-utilities" (OuterVolumeSpecName: "utilities") pod "95d21ef3-45db-4786-bb22-1a4660b26e98" (UID: "95d21ef3-45db-4786-bb22-1a4660b26e98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.234883 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95d21ef3-45db-4786-bb22-1a4660b26e98-kube-api-access-xcw96" (OuterVolumeSpecName: "kube-api-access-xcw96") pod "95d21ef3-45db-4786-bb22-1a4660b26e98" (UID: "95d21ef3-45db-4786-bb22-1a4660b26e98"). InnerVolumeSpecName "kube-api-access-xcw96". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.330544 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.330615 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xcw96\" (UniqueName: \"kubernetes.io/projected/95d21ef3-45db-4786-bb22-1a4660b26e98-kube-api-access-xcw96\") on node \"crc\" DevicePath \"\""
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.343334 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95d21ef3-45db-4786-bb22-1a4660b26e98" (UID: "95d21ef3-45db-4786-bb22-1a4660b26e98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.432197 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d21ef3-45db-4786-bb22-1a4660b26e98-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.694176 5120 generic.go:358] "Generic (PLEG): container finished" podID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerID="4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4" exitCode=0
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.694376 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cd8qm"
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.694358 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd8qm" event={"ID":"95d21ef3-45db-4786-bb22-1a4660b26e98","Type":"ContainerDied","Data":"4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4"}
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.694482 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd8qm" event={"ID":"95d21ef3-45db-4786-bb22-1a4660b26e98","Type":"ContainerDied","Data":"bda931994cbf69f925888b33d7ff244764b67e604b557430bec63fa583126f66"}
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.694519 5120 scope.go:117] "RemoveContainer" containerID="4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4"
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.728895 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cd8qm"]
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.732472 5120 scope.go:117] "RemoveContainer" containerID="a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f"
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.740189 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cd8qm"]
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.756020 5120 scope.go:117] "RemoveContainer" containerID="a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de"
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.805893 5120 scope.go:117] "RemoveContainer" containerID="4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4"
Jan 22 12:21:07 crc kubenswrapper[5120]: E0122 12:21:07.806458 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4\": container with ID starting with 4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4 not found: ID does not exist" containerID="4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4"
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.806497 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4"} err="failed to get container status \"4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4\": rpc error: code = NotFound desc = could not find container \"4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4\": container with ID starting with 4af5eb64c5b39297250506219ad7a7594c169f009b8044fb8e6b1549986001f4 not found: ID does not exist"
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.806522 5120 scope.go:117] "RemoveContainer" containerID="a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f"
Jan 22 12:21:07 crc kubenswrapper[5120]: E0122 12:21:07.806780 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f\": container with ID starting with a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f not found: ID does not exist" containerID="a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f"
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.806805 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f"} err="failed to get container status \"a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f\": rpc error: code = NotFound desc = could not find container \"a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f\": container with ID starting with a4e04d1c02cb72863eb4774640370a060d163ad85ce10d9ec7a9e22a3b90570f not found: ID does not exist"
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.806821 5120 scope.go:117] "RemoveContainer" containerID="a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de"
Jan 22 12:21:07 crc kubenswrapper[5120]: E0122 12:21:07.807091 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de\": container with ID starting with a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de not found: ID does not exist" containerID="a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de"
Jan 22 12:21:07 crc kubenswrapper[5120]: I0122 12:21:07.807116 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de"} err="failed to get container status \"a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de\": rpc error: code = NotFound desc = could not find container \"a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de\": container with ID starting with a10cdc4d6af835d0c106444f8cac9e5feaa8b3d4e95f587d91c7532f747904de not found: ID does not exist"
Jan 22 12:21:08 crc kubenswrapper[5120]: I0122 12:21:08.584520 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"
Jan 22 12:21:08 crc kubenswrapper[5120]: E0122 12:21:08.585301 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:21:09 crc kubenswrapper[5120]: I0122 12:21:09.152115 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-xd9rs"
Jan 22 12:21:09 crc kubenswrapper[5120]: I0122 12:21:09.153391 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xd9rs"
Jan 22 12:21:09 crc kubenswrapper[5120]: I0122 12:21:09.198443 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xd9rs"
Jan 22 12:21:09 crc kubenswrapper[5120]: I0122 12:21:09.583759 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" path="/var/lib/kubelet/pods/95d21ef3-45db-4786-bb22-1a4660b26e98/volumes"
Jan 22 12:21:19 crc kubenswrapper[5120]: I0122 12:21:19.775070 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xd9rs"
Jan 22 12:21:19 crc kubenswrapper[5120]: I0122 12:21:19.833001 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xd9rs"]
Jan 22 12:21:19 crc kubenswrapper[5120]: I0122 12:21:19.833361 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xd9rs" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="registry-server" containerID="cri-o://f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806" gracePeriod=2
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.794311 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xd9rs"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.836008 5120 generic.go:358] "Generic (PLEG): container finished" podID="50b831c9-8487-4923-8280-3f8732cc4e62" containerID="f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806" exitCode=0
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.836311 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rs" event={"ID":"50b831c9-8487-4923-8280-3f8732cc4e62","Type":"ContainerDied","Data":"f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806"}
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.836345 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd9rs" event={"ID":"50b831c9-8487-4923-8280-3f8732cc4e62","Type":"ContainerDied","Data":"0416a99eafcad5b84de8e060b9ad9afbc806dd6a6d5f802cf815e9fb58c4057a"}
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.836363 5120 scope.go:117] "RemoveContainer" containerID="f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.836382 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xd9rs"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.859975 5120 scope.go:117] "RemoveContainer" containerID="a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.889629 5120 scope.go:117] "RemoveContainer" containerID="9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.918918 5120 scope.go:117] "RemoveContainer" containerID="f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806"
Jan 22 12:21:20 crc kubenswrapper[5120]: E0122 12:21:20.919431 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806\": container with ID starting with f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806 not found: ID does not exist" containerID="f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.919485 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806"} err="failed to get container status \"f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806\": rpc error: code = NotFound desc = could not find container \"f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806\": container with ID starting with f501a8db3eb1e57f2fd0469475551120fd166c35ffac9fb09f48be064f772806 not found: ID does not exist"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.919512 5120 scope.go:117] "RemoveContainer" containerID="a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d"
Jan 22 12:21:20 crc kubenswrapper[5120]: E0122 12:21:20.919809 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d\": container with ID starting with a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d not found: ID does not exist" containerID="a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.919828 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d"} err="failed to get container status \"a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d\": rpc error: code = NotFound desc = could not find container \"a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d\": container with ID starting with a958e740e19c76adb27f5b484cc01cdd94266eacf1fe26b44db60a4fd8a0967d not found: ID does not exist"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.919841 5120 scope.go:117] "RemoveContainer" containerID="9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c"
Jan 22 12:21:20 crc kubenswrapper[5120]: E0122 12:21:20.920174 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c\": container with ID starting with 9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c not found: ID does not exist" containerID="9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.920211 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c"} err="failed to get container status \"9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c\": rpc error: code = NotFound desc = could not find container \"9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c\": container with ID starting with 9b5793ff8582e2824757b1af0999c2c9b1a88a720de9663ab2a55a1bb122210c not found: ID does not exist"
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.937222 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-utilities\") pod \"50b831c9-8487-4923-8280-3f8732cc4e62\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") "
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.937393 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r46x\" (UniqueName: \"kubernetes.io/projected/50b831c9-8487-4923-8280-3f8732cc4e62-kube-api-access-9r46x\") pod \"50b831c9-8487-4923-8280-3f8732cc4e62\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") "
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.938627 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-utilities" (OuterVolumeSpecName: "utilities") pod "50b831c9-8487-4923-8280-3f8732cc4e62" (UID: "50b831c9-8487-4923-8280-3f8732cc4e62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.938893 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-catalog-content\") pod \"50b831c9-8487-4923-8280-3f8732cc4e62\" (UID: \"50b831c9-8487-4923-8280-3f8732cc4e62\") "
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.939412 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.946244 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b831c9-8487-4923-8280-3f8732cc4e62-kube-api-access-9r46x" (OuterVolumeSpecName: "kube-api-access-9r46x") pod "50b831c9-8487-4923-8280-3f8732cc4e62" (UID: "50b831c9-8487-4923-8280-3f8732cc4e62"). InnerVolumeSpecName "kube-api-access-9r46x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 22 12:21:20 crc kubenswrapper[5120]: I0122 12:21:20.975822 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50b831c9-8487-4923-8280-3f8732cc4e62" (UID: "50b831c9-8487-4923-8280-3f8732cc4e62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 22 12:21:21 crc kubenswrapper[5120]: I0122 12:21:21.041654 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9r46x\" (UniqueName: \"kubernetes.io/projected/50b831c9-8487-4923-8280-3f8732cc4e62-kube-api-access-9r46x\") on node \"crc\" DevicePath \"\""
Jan 22 12:21:21 crc kubenswrapper[5120]: I0122 12:21:21.041722 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b831c9-8487-4923-8280-3f8732cc4e62-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 12:21:21 crc kubenswrapper[5120]: I0122 12:21:21.190402 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xd9rs"]
Jan 22 12:21:21 crc kubenswrapper[5120]: I0122 12:21:21.202543 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xd9rs"]
Jan 22 12:21:21 crc kubenswrapper[5120]: I0122 12:21:21.580272 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"
Jan 22 12:21:21 crc kubenswrapper[5120]: E0122 12:21:21.580757 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:21:21 crc kubenswrapper[5120]: I0122 12:21:21.586291 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" path="/var/lib/kubelet/pods/50b831c9-8487-4923-8280-3f8732cc4e62/volumes"
Jan 22 12:21:32 crc kubenswrapper[5120]: I0122 12:21:32.572563 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"
Jan 22 12:21:32 crc kubenswrapper[5120]: E0122 12:21:32.573720 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:21:44 crc kubenswrapper[5120]: I0122 12:21:44.573231 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"
Jan 22 12:21:44 crc kubenswrapper[5120]: E0122 12:21:44.574822 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9"
Jan 22 12:21:58 crc kubenswrapper[5120]: I0122 12:21:58.571805 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d"
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.147661 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484742-4b4pf"] Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.150804 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="extract-content" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.150897 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="extract-content" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.150971 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="extract-utilities" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151064 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="extract-utilities" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151233 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="extract-content" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151295 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="extract-content" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151376 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="registry-server" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151445 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="registry-server" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151511 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="extract-utilities" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151580 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="extract-utilities" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151655 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="registry-server" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151710 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="registry-server" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.151948 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="95d21ef3-45db-4786-bb22-1a4660b26e98" containerName="registry-server" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.152066 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="50b831c9-8487-4923-8280-3f8732cc4e62" containerName="registry-server" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.164828 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484742-4b4pf" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.170458 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.171078 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.171171 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.174121 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484742-4b4pf"] Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.260543 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trbpr\" (UniqueName: \"kubernetes.io/projected/4bfb1f97-ca93-4138-99d0-06fcb09ba8f5-kube-api-access-trbpr\") pod \"auto-csr-approver-29484742-4b4pf\" (UID: \"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5\") " pod="openshift-infra/auto-csr-approver-29484742-4b4pf" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.365891 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-trbpr\" (UniqueName: \"kubernetes.io/projected/4bfb1f97-ca93-4138-99d0-06fcb09ba8f5-kube-api-access-trbpr\") pod \"auto-csr-approver-29484742-4b4pf\" (UID: \"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5\") " pod="openshift-infra/auto-csr-approver-29484742-4b4pf" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.402337 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-trbpr\" (UniqueName: \"kubernetes.io/projected/4bfb1f97-ca93-4138-99d0-06fcb09ba8f5-kube-api-access-trbpr\") pod \"auto-csr-approver-29484742-4b4pf\" (UID: \"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5\") " pod="openshift-infra/auto-csr-approver-29484742-4b4pf" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.499338 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484742-4b4pf" Jan 22 12:22:00 crc kubenswrapper[5120]: I0122 12:22:00.748925 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484742-4b4pf"] Jan 22 12:22:01 crc kubenswrapper[5120]: I0122 12:22:01.423073 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484742-4b4pf" event={"ID":"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5","Type":"ContainerStarted","Data":"1ff3314b07795c055a18c27ee0391aa973a5712bd59e9d8ea772ee9b7d1566e6"} Jan 22 12:22:03 crc kubenswrapper[5120]: I0122 12:22:03.445315 5120 generic.go:358] "Generic (PLEG): container finished" podID="4bfb1f97-ca93-4138-99d0-06fcb09ba8f5" containerID="db11fbf4c05e98a727f7dde0c0bea3704c2e71605b0732b118ce9ceec98d8a9e" exitCode=0 Jan 22 12:22:03 crc kubenswrapper[5120]: I0122 12:22:03.445400 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484742-4b4pf" event={"ID":"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5","Type":"ContainerDied","Data":"db11fbf4c05e98a727f7dde0c0bea3704c2e71605b0732b118ce9ceec98d8a9e"} Jan 22 12:22:04 crc kubenswrapper[5120]: I0122 12:22:04.732404 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484742-4b4pf" Jan 22 12:22:04 crc kubenswrapper[5120]: I0122 12:22:04.750474 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trbpr\" (UniqueName: \"kubernetes.io/projected/4bfb1f97-ca93-4138-99d0-06fcb09ba8f5-kube-api-access-trbpr\") pod \"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5\" (UID: \"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5\") " Jan 22 12:22:04 crc kubenswrapper[5120]: I0122 12:22:04.763515 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bfb1f97-ca93-4138-99d0-06fcb09ba8f5-kube-api-access-trbpr" (OuterVolumeSpecName: "kube-api-access-trbpr") pod "4bfb1f97-ca93-4138-99d0-06fcb09ba8f5" (UID: "4bfb1f97-ca93-4138-99d0-06fcb09ba8f5"). InnerVolumeSpecName "kube-api-access-trbpr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:22:04 crc kubenswrapper[5120]: I0122 12:22:04.852236 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-trbpr\" (UniqueName: \"kubernetes.io/projected/4bfb1f97-ca93-4138-99d0-06fcb09ba8f5-kube-api-access-trbpr\") on node \"crc\" DevicePath \"\"" Jan 22 12:22:05 crc kubenswrapper[5120]: I0122 12:22:05.470260 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484742-4b4pf" event={"ID":"4bfb1f97-ca93-4138-99d0-06fcb09ba8f5","Type":"ContainerDied","Data":"1ff3314b07795c055a18c27ee0391aa973a5712bd59e9d8ea772ee9b7d1566e6"} Jan 22 12:22:05 crc kubenswrapper[5120]: I0122 12:22:05.470396 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ff3314b07795c055a18c27ee0391aa973a5712bd59e9d8ea772ee9b7d1566e6" Jan 22 12:22:05 crc kubenswrapper[5120]: I0122 12:22:05.470303 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484742-4b4pf" Jan 22 12:22:05 crc kubenswrapper[5120]: I0122 12:22:05.817647 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484736-5pvc5"] Jan 22 12:22:05 crc kubenswrapper[5120]: I0122 12:22:05.825896 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484736-5pvc5"] Jan 22 12:22:07 crc kubenswrapper[5120]: I0122 12:22:07.584566 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca" path="/var/lib/kubelet/pods/5256dd8c-d5c9-4b8f-8e6e-6fa5175741ca/volumes" Jan 22 12:22:12 crc kubenswrapper[5120]: I0122 12:22:12.573064 5120 scope.go:117] "RemoveContainer" containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:22:13 crc kubenswrapper[5120]: I0122 12:22:13.563214 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"e853360e55cf5a442f891e5c045632b5fe91a8840293356f1cb5a89ddebe318b"} Jan 22 12:22:51 crc kubenswrapper[5120]: I0122 12:22:51.490220 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:22:51 crc kubenswrapper[5120]: I0122 12:22:51.490247 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:22:51 crc kubenswrapper[5120]: I0122 12:22:51.506531 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:22:51 crc kubenswrapper[5120]: I0122 12:22:51.506537 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:23:02 crc kubenswrapper[5120]: I0122 12:23:02.000605 5120 scope.go:117] "RemoveContainer" containerID="7dd5e09283dddb7bf8d7833ea438fcac480d32b32def3f4fc53d049422374e23" Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.669042 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m7ncp"] Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.670632 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4bfb1f97-ca93-4138-99d0-06fcb09ba8f5" containerName="oc" Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.670653 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfb1f97-ca93-4138-99d0-06fcb09ba8f5" containerName="oc" Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.670873 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="4bfb1f97-ca93-4138-99d0-06fcb09ba8f5" containerName="oc" Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.737155 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7ncp"] Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.737421 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.907365 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-catalog-content\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.907462 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgpt6\" (UniqueName: \"kubernetes.io/projected/181146be-5e90-40cd-bd8f-63dd9bf20dc7-kube-api-access-vgpt6\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:08 crc kubenswrapper[5120]: I0122 12:23:08.907885 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-utilities\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.009873 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-utilities\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.010172 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-catalog-content\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.010374 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vgpt6\" (UniqueName: \"kubernetes.io/projected/181146be-5e90-40cd-bd8f-63dd9bf20dc7-kube-api-access-vgpt6\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.010573 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-utilities\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.010666 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-catalog-content\") pod \"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.046608 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgpt6\" (UniqueName: \"kubernetes.io/projected/181146be-5e90-40cd-bd8f-63dd9bf20dc7-kube-api-access-vgpt6\") pod 
\"community-operators-m7ncp\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.070194 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:09 crc kubenswrapper[5120]: I0122 12:23:09.541269 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7ncp"] Jan 22 12:23:10 crc kubenswrapper[5120]: I0122 12:23:10.125190 5120 generic.go:358] "Generic (PLEG): container finished" podID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerID="d70f3ae8f6dc4c8ca9c28d1bbd4219e78f001f6cd7ce80719951d1cacaa18b82" exitCode=0 Jan 22 12:23:10 crc kubenswrapper[5120]: I0122 12:23:10.125398 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ncp" event={"ID":"181146be-5e90-40cd-bd8f-63dd9bf20dc7","Type":"ContainerDied","Data":"d70f3ae8f6dc4c8ca9c28d1bbd4219e78f001f6cd7ce80719951d1cacaa18b82"} Jan 22 12:23:10 crc kubenswrapper[5120]: I0122 12:23:10.125434 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ncp" event={"ID":"181146be-5e90-40cd-bd8f-63dd9bf20dc7","Type":"ContainerStarted","Data":"9b088af40036e4f9a0b2f6a7b5932fb6d2a72cbf2192757269776a56fb425ec6"} Jan 22 12:23:11 crc kubenswrapper[5120]: I0122 12:23:11.133684 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ncp" event={"ID":"181146be-5e90-40cd-bd8f-63dd9bf20dc7","Type":"ContainerStarted","Data":"3286e63188551682f09dd95794331863f3e1ab378f00c931ac8e58768cc114a9"} Jan 22 12:23:12 crc kubenswrapper[5120]: I0122 12:23:12.151228 5120 generic.go:358] "Generic (PLEG): container finished" podID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerID="3286e63188551682f09dd95794331863f3e1ab378f00c931ac8e58768cc114a9" exitCode=0 Jan 22 12:23:12 crc kubenswrapper[5120]: I0122 12:23:12.152032 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ncp" event={"ID":"181146be-5e90-40cd-bd8f-63dd9bf20dc7","Type":"ContainerDied","Data":"3286e63188551682f09dd95794331863f3e1ab378f00c931ac8e58768cc114a9"} Jan 22 12:23:13 crc kubenswrapper[5120]: I0122 12:23:13.163604 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ncp" event={"ID":"181146be-5e90-40cd-bd8f-63dd9bf20dc7","Type":"ContainerStarted","Data":"f30019d835fc83581f6bff42e30a63fce7e5cb4a8d787f37b9ca40d8ea0858ec"} Jan 22 12:23:19 crc kubenswrapper[5120]: I0122 12:23:19.076099 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:19 crc kubenswrapper[5120]: I0122 12:23:19.087002 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:19 crc kubenswrapper[5120]: I0122 12:23:19.132254 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:19 crc kubenswrapper[5120]: I0122 12:23:19.155264 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m7ncp" podStartSLOduration=10.468687468 podStartE2EDuration="11.15524585s" podCreationTimestamp="2026-01-22 12:23:08 +0000 UTC" firstStartedPulling="2026-01-22 
12:23:10.128841987 +0000 UTC m=+2124.872790368" lastFinishedPulling="2026-01-22 12:23:10.815400409 +0000 UTC m=+2125.559348750" observedRunningTime="2026-01-22 12:23:13.189004189 +0000 UTC m=+2127.932952540" watchObservedRunningTime="2026-01-22 12:23:19.15524585 +0000 UTC m=+2133.899194191" Jan 22 12:23:19 crc kubenswrapper[5120]: I0122 12:23:19.257939 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:22 crc kubenswrapper[5120]: I0122 12:23:22.837477 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7ncp"] Jan 22 12:23:22 crc kubenswrapper[5120]: I0122 12:23:22.838618 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m7ncp" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="registry-server" containerID="cri-o://f30019d835fc83581f6bff42e30a63fce7e5cb4a8d787f37b9ca40d8ea0858ec" gracePeriod=2 Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.258898 5120 generic.go:358] "Generic (PLEG): container finished" podID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerID="f30019d835fc83581f6bff42e30a63fce7e5cb4a8d787f37b9ca40d8ea0858ec" exitCode=0 Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.258994 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ncp" event={"ID":"181146be-5e90-40cd-bd8f-63dd9bf20dc7","Type":"ContainerDied","Data":"f30019d835fc83581f6bff42e30a63fce7e5cb4a8d787f37b9ca40d8ea0858ec"} Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.744858 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.880187 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgpt6\" (UniqueName: \"kubernetes.io/projected/181146be-5e90-40cd-bd8f-63dd9bf20dc7-kube-api-access-vgpt6\") pod \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.880499 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-catalog-content\") pod \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.892234 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/181146be-5e90-40cd-bd8f-63dd9bf20dc7-kube-api-access-vgpt6" (OuterVolumeSpecName: "kube-api-access-vgpt6") pod "181146be-5e90-40cd-bd8f-63dd9bf20dc7" (UID: "181146be-5e90-40cd-bd8f-63dd9bf20dc7"). InnerVolumeSpecName "kube-api-access-vgpt6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.915237 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-utilities\") pod \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\" (UID: \"181146be-5e90-40cd-bd8f-63dd9bf20dc7\") " Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.918274 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vgpt6\" (UniqueName: \"kubernetes.io/projected/181146be-5e90-40cd-bd8f-63dd9bf20dc7-kube-api-access-vgpt6\") on node \"crc\" DevicePath \"\"" Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.929048 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-utilities" (OuterVolumeSpecName: "utilities") pod "181146be-5e90-40cd-bd8f-63dd9bf20dc7" (UID: "181146be-5e90-40cd-bd8f-63dd9bf20dc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:23:23 crc kubenswrapper[5120]: I0122 12:23:23.986554 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "181146be-5e90-40cd-bd8f-63dd9bf20dc7" (UID: "181146be-5e90-40cd-bd8f-63dd9bf20dc7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.019380 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.019416 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/181146be-5e90-40cd-bd8f-63dd9bf20dc7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.270411 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m7ncp" Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.270404 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ncp" event={"ID":"181146be-5e90-40cd-bd8f-63dd9bf20dc7","Type":"ContainerDied","Data":"9b088af40036e4f9a0b2f6a7b5932fb6d2a72cbf2192757269776a56fb425ec6"} Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.270472 5120 scope.go:117] "RemoveContainer" containerID="f30019d835fc83581f6bff42e30a63fce7e5cb4a8d787f37b9ca40d8ea0858ec" Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.300781 5120 scope.go:117] "RemoveContainer" containerID="3286e63188551682f09dd95794331863f3e1ab378f00c931ac8e58768cc114a9" Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.314039 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7ncp"] Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.323026 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m7ncp"] Jan 22 12:23:24 crc kubenswrapper[5120]: I0122 12:23:24.332390 5120 scope.go:117] "RemoveContainer" containerID="d70f3ae8f6dc4c8ca9c28d1bbd4219e78f001f6cd7ce80719951d1cacaa18b82" Jan 22 12:23:25 crc kubenswrapper[5120]: I0122 12:23:25.607150 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" path="/var/lib/kubelet/pods/181146be-5e90-40cd-bd8f-63dd9bf20dc7/volumes" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.156493 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484744-7g58z"] Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.159051 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="extract-content" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.159081 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="extract-content" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.159132 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="registry-server" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.159144 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="registry-server" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.159167 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="extract-utilities" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.159180 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="extract-utilities" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.159477 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="181146be-5e90-40cd-bd8f-63dd9bf20dc7" containerName="registry-server" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.183523 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484744-7g58z"] Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.183772 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484744-7g58z" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.188335 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.191937 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.193906 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.237695 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k44cv\" (UniqueName: \"kubernetes.io/projected/9cde2753-8f27-404a-8fbc-d297e718b3b8-kube-api-access-k44cv\") pod \"auto-csr-approver-29484744-7g58z\" (UID: \"9cde2753-8f27-404a-8fbc-d297e718b3b8\") " pod="openshift-infra/auto-csr-approver-29484744-7g58z" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.340033 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k44cv\" (UniqueName: \"kubernetes.io/projected/9cde2753-8f27-404a-8fbc-d297e718b3b8-kube-api-access-k44cv\") pod \"auto-csr-approver-29484744-7g58z\" (UID: \"9cde2753-8f27-404a-8fbc-d297e718b3b8\") " pod="openshift-infra/auto-csr-approver-29484744-7g58z" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.365678 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k44cv\" (UniqueName: \"kubernetes.io/projected/9cde2753-8f27-404a-8fbc-d297e718b3b8-kube-api-access-k44cv\") pod \"auto-csr-approver-29484744-7g58z\" (UID: \"9cde2753-8f27-404a-8fbc-d297e718b3b8\") " pod="openshift-infra/auto-csr-approver-29484744-7g58z" Jan 22 12:24:00 crc kubenswrapper[5120]: I0122 12:24:00.504939 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484744-7g58z" Jan 22 12:24:01 crc kubenswrapper[5120]: W0122 12:24:01.001097 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cde2753_8f27_404a_8fbc_d297e718b3b8.slice/crio-54d7e099ba1a58235caba17b1537e1adc610f767817d2f9845eae2f46eba12f1 WatchSource:0}: Error finding container 54d7e099ba1a58235caba17b1537e1adc610f767817d2f9845eae2f46eba12f1: Status 404 returned error can't find the container with id 54d7e099ba1a58235caba17b1537e1adc610f767817d2f9845eae2f46eba12f1 Jan 22 12:24:01 crc kubenswrapper[5120]: I0122 12:24:01.010782 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484744-7g58z"] Jan 22 12:24:01 crc kubenswrapper[5120]: I0122 12:24:01.641163 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484744-7g58z" event={"ID":"9cde2753-8f27-404a-8fbc-d297e718b3b8","Type":"ContainerStarted","Data":"54d7e099ba1a58235caba17b1537e1adc610f767817d2f9845eae2f46eba12f1"} Jan 22 12:24:02 crc kubenswrapper[5120]: I0122 12:24:02.651334 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484744-7g58z" event={"ID":"9cde2753-8f27-404a-8fbc-d297e718b3b8","Type":"ContainerStarted","Data":"265c28387fd25a8a35e27895239a66ae8d41b785dc39bc594bbfbfd15a6f5f83"} Jan 22 12:24:02 crc kubenswrapper[5120]: I0122 12:24:02.678853 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484744-7g58z" podStartSLOduration=1.548985007 podStartE2EDuration="2.678820887s" podCreationTimestamp="2026-01-22 12:24:00 +0000 UTC" firstStartedPulling="2026-01-22 12:24:01.004683347 +0000 UTC m=+2175.748631728" lastFinishedPulling="2026-01-22 12:24:02.134519267 +0000 UTC m=+2176.878467608" observedRunningTime="2026-01-22 12:24:02.669702902 +0000 UTC m=+2177.413651263" watchObservedRunningTime="2026-01-22 12:24:02.678820887 +0000 UTC m=+2177.422769228" Jan 22 12:24:03 crc kubenswrapper[5120]: I0122 12:24:03.663354 5120 generic.go:358] "Generic (PLEG): container finished" podID="9cde2753-8f27-404a-8fbc-d297e718b3b8" containerID="265c28387fd25a8a35e27895239a66ae8d41b785dc39bc594bbfbfd15a6f5f83" exitCode=0 Jan 22 12:24:03 crc kubenswrapper[5120]: I0122 12:24:03.663479 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484744-7g58z" event={"ID":"9cde2753-8f27-404a-8fbc-d297e718b3b8","Type":"ContainerDied","Data":"265c28387fd25a8a35e27895239a66ae8d41b785dc39bc594bbfbfd15a6f5f83"} Jan 22 12:24:04 crc kubenswrapper[5120]: I0122 12:24:04.977383 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484744-7g58z" Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.139566 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k44cv\" (UniqueName: \"kubernetes.io/projected/9cde2753-8f27-404a-8fbc-d297e718b3b8-kube-api-access-k44cv\") pod \"9cde2753-8f27-404a-8fbc-d297e718b3b8\" (UID: \"9cde2753-8f27-404a-8fbc-d297e718b3b8\") " Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.150124 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cde2753-8f27-404a-8fbc-d297e718b3b8-kube-api-access-k44cv" (OuterVolumeSpecName: "kube-api-access-k44cv") pod "9cde2753-8f27-404a-8fbc-d297e718b3b8" (UID: "9cde2753-8f27-404a-8fbc-d297e718b3b8"). InnerVolumeSpecName "kube-api-access-k44cv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.247625 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k44cv\" (UniqueName: \"kubernetes.io/projected/9cde2753-8f27-404a-8fbc-d297e718b3b8-kube-api-access-k44cv\") on node \"crc\" DevicePath \"\"" Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.694550 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484744-7g58z" Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.694567 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484744-7g58z" event={"ID":"9cde2753-8f27-404a-8fbc-d297e718b3b8","Type":"ContainerDied","Data":"54d7e099ba1a58235caba17b1537e1adc610f767817d2f9845eae2f46eba12f1"} Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.694644 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54d7e099ba1a58235caba17b1537e1adc610f767817d2f9845eae2f46eba12f1" Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.759987 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484738-tfzpk"] Jan 22 12:24:05 crc kubenswrapper[5120]: I0122 12:24:05.768983 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484738-tfzpk"] Jan 22 12:24:07 crc kubenswrapper[5120]: I0122 12:24:07.590925 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97383a0-beb0-4ff9-a965-28e0e9b1addb" path="/var/lib/kubelet/pods/f97383a0-beb0-4ff9-a965-28e0e9b1addb/volumes" Jan 22 12:24:31 crc kubenswrapper[5120]: I0122 12:24:31.973037 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:24:31 crc kubenswrapper[5120]: I0122 12:24:31.973951 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:25:01 crc kubenswrapper[5120]: I0122 12:25:01.972984 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:25:01 crc kubenswrapper[5120]: I0122 12:25:01.974004 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:25:02 crc kubenswrapper[5120]: I0122 12:25:02.229030 5120 scope.go:117] "RemoveContainer" containerID="5130cc2c660ed67d488de9c861af0f840a6694cd424858313d97ed3425c416ca" Jan 22 12:25:31 crc kubenswrapper[5120]: I0122 12:25:31.973594 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:25:31 crc kubenswrapper[5120]: I0122 12:25:31.977213 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:25:31 crc kubenswrapper[5120]: I0122 12:25:31.977486 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:25:31 crc kubenswrapper[5120]: I0122 12:25:31.978768 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e853360e55cf5a442f891e5c045632b5fe91a8840293356f1cb5a89ddebe318b"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:25:31 crc kubenswrapper[5120]: I0122 12:25:31.978987 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://e853360e55cf5a442f891e5c045632b5fe91a8840293356f1cb5a89ddebe318b" gracePeriod=600 Jan 22 12:25:33 crc kubenswrapper[5120]: I0122 12:25:33.117103 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="e853360e55cf5a442f891e5c045632b5fe91a8840293356f1cb5a89ddebe318b" exitCode=0 Jan 22 12:25:33 crc kubenswrapper[5120]: I0122 12:25:33.117336 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"e853360e55cf5a442f891e5c045632b5fe91a8840293356f1cb5a89ddebe318b"} Jan 22 12:25:33 crc kubenswrapper[5120]: I0122 12:25:33.120247 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"} Jan 22 12:25:33 crc kubenswrapper[5120]: I0122 12:25:33.120335 5120 scope.go:117] "RemoveContainer" 
containerID="eda097a757f91e81d87c633c172aa5f1c9e7f79ccd5da35f6dbb6ffc692dc58d" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.148235 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484746-xsmfp"] Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.150883 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9cde2753-8f27-404a-8fbc-d297e718b3b8" containerName="oc" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.150904 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cde2753-8f27-404a-8fbc-d297e718b3b8" containerName="oc" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.151119 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="9cde2753-8f27-404a-8fbc-d297e718b3b8" containerName="oc" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.159389 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.166118 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.166444 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.166781 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.170659 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484746-xsmfp"] Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.242200 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9qg\" (UniqueName: \"kubernetes.io/projected/7cba6b62-807f-4e37-b350-bc4eef13747b-kube-api-access-zc9qg\") pod \"auto-csr-approver-29484746-xsmfp\" (UID: \"7cba6b62-807f-4e37-b350-bc4eef13747b\") " pod="openshift-infra/auto-csr-approver-29484746-xsmfp" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.344890 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zc9qg\" (UniqueName: \"kubernetes.io/projected/7cba6b62-807f-4e37-b350-bc4eef13747b-kube-api-access-zc9qg\") pod \"auto-csr-approver-29484746-xsmfp\" (UID: \"7cba6b62-807f-4e37-b350-bc4eef13747b\") " pod="openshift-infra/auto-csr-approver-29484746-xsmfp" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.398011 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc9qg\" (UniqueName: \"kubernetes.io/projected/7cba6b62-807f-4e37-b350-bc4eef13747b-kube-api-access-zc9qg\") pod \"auto-csr-approver-29484746-xsmfp\" (UID: \"7cba6b62-807f-4e37-b350-bc4eef13747b\") " pod="openshift-infra/auto-csr-approver-29484746-xsmfp" Jan 22 12:26:00 crc kubenswrapper[5120]: I0122 12:26:00.491232 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" Jan 22 12:26:01 crc kubenswrapper[5120]: I0122 12:26:01.015111 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484746-xsmfp"] Jan 22 12:26:01 crc kubenswrapper[5120]: I0122 12:26:01.022365 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:26:01 crc kubenswrapper[5120]: I0122 12:26:01.430020 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" event={"ID":"7cba6b62-807f-4e37-b350-bc4eef13747b","Type":"ContainerStarted","Data":"743d82ca419ab80bcf1ec658824f6c99ac6ac74f8bc9a95079e00f3eb1a56da0"} Jan 22 12:26:02 crc kubenswrapper[5120]: I0122 12:26:02.441052 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" event={"ID":"7cba6b62-807f-4e37-b350-bc4eef13747b","Type":"ContainerStarted","Data":"6b1f924c30425523c67b96c181b3a024d387e63c67e79368cd5fa28556694ba7"} Jan 22 12:26:02 crc kubenswrapper[5120]: I0122 12:26:02.465720 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" podStartSLOduration=1.608265412 podStartE2EDuration="2.465697299s" podCreationTimestamp="2026-01-22 12:26:00 +0000 UTC" firstStartedPulling="2026-01-22 12:26:01.022540988 +0000 UTC m=+2295.766489329" lastFinishedPulling="2026-01-22 12:26:01.879972865 +0000 UTC m=+2296.623921216" observedRunningTime="2026-01-22 12:26:02.459146757 +0000 UTC m=+2297.203095118" watchObservedRunningTime="2026-01-22 12:26:02.465697299 +0000 UTC m=+2297.209645640" Jan 22 12:26:03 crc kubenswrapper[5120]: I0122 12:26:03.452648 5120 generic.go:358] "Generic (PLEG): container finished" podID="7cba6b62-807f-4e37-b350-bc4eef13747b" containerID="6b1f924c30425523c67b96c181b3a024d387e63c67e79368cd5fa28556694ba7" exitCode=0 Jan 22 12:26:03 crc kubenswrapper[5120]: I0122 12:26:03.452830 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" event={"ID":"7cba6b62-807f-4e37-b350-bc4eef13747b","Type":"ContainerDied","Data":"6b1f924c30425523c67b96c181b3a024d387e63c67e79368cd5fa28556694ba7"} Jan 22 12:26:04 crc kubenswrapper[5120]: I0122 12:26:04.810141 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" Jan 22 12:26:04 crc kubenswrapper[5120]: I0122 12:26:04.928983 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc9qg\" (UniqueName: \"kubernetes.io/projected/7cba6b62-807f-4e37-b350-bc4eef13747b-kube-api-access-zc9qg\") pod \"7cba6b62-807f-4e37-b350-bc4eef13747b\" (UID: \"7cba6b62-807f-4e37-b350-bc4eef13747b\") " Jan 22 12:26:04 crc kubenswrapper[5120]: I0122 12:26:04.938198 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cba6b62-807f-4e37-b350-bc4eef13747b-kube-api-access-zc9qg" (OuterVolumeSpecName: "kube-api-access-zc9qg") pod "7cba6b62-807f-4e37-b350-bc4eef13747b" (UID: "7cba6b62-807f-4e37-b350-bc4eef13747b"). InnerVolumeSpecName "kube-api-access-zc9qg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:26:05 crc kubenswrapper[5120]: I0122 12:26:05.031710 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zc9qg\" (UniqueName: \"kubernetes.io/projected/7cba6b62-807f-4e37-b350-bc4eef13747b-kube-api-access-zc9qg\") on node \"crc\" DevicePath \"\"" Jan 22 12:26:05 crc kubenswrapper[5120]: I0122 12:26:05.473281 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" event={"ID":"7cba6b62-807f-4e37-b350-bc4eef13747b","Type":"ContainerDied","Data":"743d82ca419ab80bcf1ec658824f6c99ac6ac74f8bc9a95079e00f3eb1a56da0"} Jan 22 12:26:05 crc kubenswrapper[5120]: I0122 12:26:05.473358 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="743d82ca419ab80bcf1ec658824f6c99ac6ac74f8bc9a95079e00f3eb1a56da0" Jan 22 12:26:05 crc kubenswrapper[5120]: I0122 12:26:05.473477 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484746-xsmfp" Jan 22 12:26:05 crc kubenswrapper[5120]: I0122 12:26:05.545615 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484740-pq7hx"] Jan 22 12:26:05 crc kubenswrapper[5120]: I0122 12:26:05.552324 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484740-pq7hx"] Jan 22 12:26:05 crc kubenswrapper[5120]: I0122 12:26:05.583132 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6609faf3-2234-4edf-96b2-132b3e0c23c4" path="/var/lib/kubelet/pods/6609faf3-2234-4edf-96b2-132b3e0c23c4/volumes" Jan 22 12:27:02 crc kubenswrapper[5120]: I0122 12:27:02.443025 5120 scope.go:117] "RemoveContainer" containerID="be0e7176f01a842ccbd6627161b56398b3ffe33051efd8876db22a192b4801d2" Jan 22 12:27:51 crc kubenswrapper[5120]: I0122 12:27:51.631918 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:27:51 crc kubenswrapper[5120]: I0122 12:27:51.633795 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:27:51 crc kubenswrapper[5120]: I0122 12:27:51.645736 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:27:51 crc kubenswrapper[5120]: I0122 12:27:51.645854 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.168684 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484748-kjqpj"] Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.171652 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7cba6b62-807f-4e37-b350-bc4eef13747b" containerName="oc" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.171683 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cba6b62-807f-4e37-b350-bc4eef13747b" containerName="oc" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.172055 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="7cba6b62-807f-4e37-b350-bc4eef13747b" containerName="oc" Jan 22 12:28:00 crc 
kubenswrapper[5120]: I0122 12:28:00.189925 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484748-kjqpj"] Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.190627 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484748-kjqpj" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.195619 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.195732 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.196021 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.344530 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lsvp\" (UniqueName: \"kubernetes.io/projected/19e6cf90-948d-4188-8603-4f42f5a2400e-kube-api-access-2lsvp\") pod \"auto-csr-approver-29484748-kjqpj\" (UID: \"19e6cf90-948d-4188-8603-4f42f5a2400e\") " pod="openshift-infra/auto-csr-approver-29484748-kjqpj" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.447520 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2lsvp\" (UniqueName: \"kubernetes.io/projected/19e6cf90-948d-4188-8603-4f42f5a2400e-kube-api-access-2lsvp\") pod \"auto-csr-approver-29484748-kjqpj\" (UID: \"19e6cf90-948d-4188-8603-4f42f5a2400e\") " pod="openshift-infra/auto-csr-approver-29484748-kjqpj" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.475815 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lsvp\" (UniqueName: \"kubernetes.io/projected/19e6cf90-948d-4188-8603-4f42f5a2400e-kube-api-access-2lsvp\") pod \"auto-csr-approver-29484748-kjqpj\" (UID: \"19e6cf90-948d-4188-8603-4f42f5a2400e\") " pod="openshift-infra/auto-csr-approver-29484748-kjqpj" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.519833 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484748-kjqpj" Jan 22 12:28:00 crc kubenswrapper[5120]: I0122 12:28:00.792843 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484748-kjqpj"] Jan 22 12:28:01 crc kubenswrapper[5120]: I0122 12:28:01.767690 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484748-kjqpj" event={"ID":"19e6cf90-948d-4188-8603-4f42f5a2400e","Type":"ContainerStarted","Data":"d4d5767ff14553fb1a7f4452986ce02e5768fe2d919221b289de11c8ceb561c2"} Jan 22 12:28:01 crc kubenswrapper[5120]: I0122 12:28:01.972235 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:28:01 crc kubenswrapper[5120]: I0122 12:28:01.972520 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:28:02 crc kubenswrapper[5120]: I0122 12:28:02.779106 5120 generic.go:358] "Generic (PLEG): container finished" podID="19e6cf90-948d-4188-8603-4f42f5a2400e" containerID="87dcaa48bc692cdf9ab6041cfa08659e3160bb8e1c6b034284ede8cacd86f655" exitCode=0 Jan 22 12:28:02 crc kubenswrapper[5120]: I0122 12:28:02.779178 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484748-kjqpj" event={"ID":"19e6cf90-948d-4188-8603-4f42f5a2400e","Type":"ContainerDied","Data":"87dcaa48bc692cdf9ab6041cfa08659e3160bb8e1c6b034284ede8cacd86f655"} Jan 22 12:28:04 crc kubenswrapper[5120]: I0122 12:28:04.199052 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484748-kjqpj" Jan 22 12:28:04 crc kubenswrapper[5120]: I0122 12:28:04.261769 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lsvp\" (UniqueName: \"kubernetes.io/projected/19e6cf90-948d-4188-8603-4f42f5a2400e-kube-api-access-2lsvp\") pod \"19e6cf90-948d-4188-8603-4f42f5a2400e\" (UID: \"19e6cf90-948d-4188-8603-4f42f5a2400e\") " Jan 22 12:28:04 crc kubenswrapper[5120]: I0122 12:28:04.275478 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e6cf90-948d-4188-8603-4f42f5a2400e-kube-api-access-2lsvp" (OuterVolumeSpecName: "kube-api-access-2lsvp") pod "19e6cf90-948d-4188-8603-4f42f5a2400e" (UID: "19e6cf90-948d-4188-8603-4f42f5a2400e"). InnerVolumeSpecName "kube-api-access-2lsvp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:28:04 crc kubenswrapper[5120]: I0122 12:28:04.364044 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2lsvp\" (UniqueName: \"kubernetes.io/projected/19e6cf90-948d-4188-8603-4f42f5a2400e-kube-api-access-2lsvp\") on node \"crc\" DevicePath \"\"" Jan 22 12:28:04 crc kubenswrapper[5120]: I0122 12:28:04.803350 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484748-kjqpj" event={"ID":"19e6cf90-948d-4188-8603-4f42f5a2400e","Type":"ContainerDied","Data":"d4d5767ff14553fb1a7f4452986ce02e5768fe2d919221b289de11c8ceb561c2"} Jan 22 12:28:04 crc kubenswrapper[5120]: I0122 12:28:04.803462 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4d5767ff14553fb1a7f4452986ce02e5768fe2d919221b289de11c8ceb561c2" Jan 22 12:28:04 crc kubenswrapper[5120]: I0122 12:28:04.804150 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484748-kjqpj" Jan 22 12:28:05 crc kubenswrapper[5120]: I0122 12:28:05.313167 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484742-4b4pf"] Jan 22 12:28:05 crc kubenswrapper[5120]: I0122 12:28:05.324607 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484742-4b4pf"] Jan 22 12:28:05 crc kubenswrapper[5120]: I0122 12:28:05.597889 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bfb1f97-ca93-4138-99d0-06fcb09ba8f5" path="/var/lib/kubelet/pods/4bfb1f97-ca93-4138-99d0-06fcb09ba8f5/volumes" Jan 22 12:28:31 crc kubenswrapper[5120]: I0122 12:28:31.973524 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:28:31 crc kubenswrapper[5120]: I0122 12:28:31.974482 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:29:01 crc kubenswrapper[5120]: I0122 12:29:01.973466 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:29:01 crc kubenswrapper[5120]: I0122 12:29:01.974393 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:29:01 crc kubenswrapper[5120]: I0122 12:29:01.974448 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:29:01 crc kubenswrapper[5120]: I0122 12:29:01.975694 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:29:01 crc kubenswrapper[5120]: I0122 12:29:01.975762 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" gracePeriod=600 Jan 22 12:29:02 crc kubenswrapper[5120]: E0122 12:29:02.142552 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:29:02 crc kubenswrapper[5120]: I0122 12:29:02.465311 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" exitCode=0 Jan 22 12:29:02 crc kubenswrapper[5120]: I0122 12:29:02.465581 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07"} Jan 22 12:29:02 crc kubenswrapper[5120]: I0122 12:29:02.465647 5120 scope.go:117] "RemoveContainer" containerID="e853360e55cf5a442f891e5c045632b5fe91a8840293356f1cb5a89ddebe318b" Jan 22 12:29:02 crc kubenswrapper[5120]: I0122 12:29:02.466635 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:29:02 crc kubenswrapper[5120]: E0122 12:29:02.467157 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:29:02 crc kubenswrapper[5120]: I0122 12:29:02.638634 5120 scope.go:117] "RemoveContainer" containerID="db11fbf4c05e98a727f7dde0c0bea3704c2e71605b0732b118ce9ceec98d8a9e" Jan 22 12:29:13 crc kubenswrapper[5120]: I0122 12:29:13.573677 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:29:13 crc kubenswrapper[5120]: E0122 12:29:13.574878 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:29:26 crc kubenswrapper[5120]: I0122 12:29:26.572223 5120 scope.go:117] "RemoveContainer" 
containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:29:26 crc kubenswrapper[5120]: E0122 12:29:26.573690 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:29:40 crc kubenswrapper[5120]: I0122 12:29:40.572317 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:29:40 crc kubenswrapper[5120]: E0122 12:29:40.573739 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:29:54 crc kubenswrapper[5120]: I0122 12:29:54.572081 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:29:54 crc kubenswrapper[5120]: E0122 12:29:54.573898 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.162082 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484750-sqt7t"] Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.163891 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19e6cf90-948d-4188-8603-4f42f5a2400e" containerName="oc" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.163910 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e6cf90-948d-4188-8603-4f42f5a2400e" containerName="oc" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.164187 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="19e6cf90-948d-4188-8603-4f42f5a2400e" containerName="oc" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.176545 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484750-sqt7t" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.179605 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.180597 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.181245 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch"] Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.182121 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.189945 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484750-sqt7t"] Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.190181 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.194427 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.194732 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.197302 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch"] Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.354345 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt75m\" (UniqueName: \"kubernetes.io/projected/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-kube-api-access-xt75m\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.354419 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtjsf\" (UniqueName: \"kubernetes.io/projected/3f073b85-c7cf-489a-8e89-7bf6bc9a2124-kube-api-access-mtjsf\") pod \"auto-csr-approver-29484750-sqt7t\" (UID: \"3f073b85-c7cf-489a-8e89-7bf6bc9a2124\") " pod="openshift-infra/auto-csr-approver-29484750-sqt7t" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.354590 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-config-volume\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.354809 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-secret-volume\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.456612 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-config-volume\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.456720 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-secret-volume\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.456878 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xt75m\" (UniqueName: \"kubernetes.io/projected/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-kube-api-access-xt75m\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.456932 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mtjsf\" (UniqueName: \"kubernetes.io/projected/3f073b85-c7cf-489a-8e89-7bf6bc9a2124-kube-api-access-mtjsf\") pod \"auto-csr-approver-29484750-sqt7t\" (UID: \"3f073b85-c7cf-489a-8e89-7bf6bc9a2124\") " pod="openshift-infra/auto-csr-approver-29484750-sqt7t" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.459047 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-config-volume\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.493771 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-secret-volume\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.507479 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtjsf\" (UniqueName: \"kubernetes.io/projected/3f073b85-c7cf-489a-8e89-7bf6bc9a2124-kube-api-access-mtjsf\") pod \"auto-csr-approver-29484750-sqt7t\" (UID: \"3f073b85-c7cf-489a-8e89-7bf6bc9a2124\") " pod="openshift-infra/auto-csr-approver-29484750-sqt7t" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.509100 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484750-sqt7t" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.511537 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt75m\" (UniqueName: \"kubernetes.io/projected/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-kube-api-access-xt75m\") pod \"collect-profiles-29484750-jzzch\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.523268 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:00 crc kubenswrapper[5120]: W0122 12:30:00.785982 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f073b85_c7cf_489a_8e89_7bf6bc9a2124.slice/crio-954bb1e516fab9f4b414aac9ddaa786139a9c43c4e1ab0c6093badfd49c7c8bc WatchSource:0}: Error finding container 954bb1e516fab9f4b414aac9ddaa786139a9c43c4e1ab0c6093badfd49c7c8bc: Status 404 returned error can't find the container with id 954bb1e516fab9f4b414aac9ddaa786139a9c43c4e1ab0c6093badfd49c7c8bc Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.788557 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484750-sqt7t"] Jan 22 12:30:00 crc kubenswrapper[5120]: I0122 12:30:00.824403 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch"] Jan 22 12:30:00 crc kubenswrapper[5120]: W0122 12:30:00.827464 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aba8941_2ebf_4bf6_94ab_a1b999b2366a.slice/crio-960d091e38ad7051f2aead273bce78e91011adf476eba8dfa0058d52b101cef3 WatchSource:0}: Error finding container 960d091e38ad7051f2aead273bce78e91011adf476eba8dfa0058d52b101cef3: Status 404 returned error can't find the container with id 960d091e38ad7051f2aead273bce78e91011adf476eba8dfa0058d52b101cef3 Jan 22 12:30:01 crc kubenswrapper[5120]: I0122 12:30:01.151129 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484750-sqt7t" event={"ID":"3f073b85-c7cf-489a-8e89-7bf6bc9a2124","Type":"ContainerStarted","Data":"954bb1e516fab9f4b414aac9ddaa786139a9c43c4e1ab0c6093badfd49c7c8bc"} Jan 22 12:30:01 crc kubenswrapper[5120]: I0122 12:30:01.153456 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" event={"ID":"7aba8941-2ebf-4bf6-94ab-a1b999b2366a","Type":"ContainerStarted","Data":"68b73452fa019a556704c5a2b540f627bf58c904b25685813e5ed80c9863d57b"} Jan 22 12:30:01 crc kubenswrapper[5120]: I0122 12:30:01.153490 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" event={"ID":"7aba8941-2ebf-4bf6-94ab-a1b999b2366a","Type":"ContainerStarted","Data":"960d091e38ad7051f2aead273bce78e91011adf476eba8dfa0058d52b101cef3"} Jan 22 12:30:01 crc kubenswrapper[5120]: I0122 12:30:01.181360 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" podStartSLOduration=1.181337171 podStartE2EDuration="1.181337171s" podCreationTimestamp="2026-01-22 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 12:30:01.179270571 +0000 UTC m=+2535.923218922" watchObservedRunningTime="2026-01-22 12:30:01.181337171 +0000 UTC m=+2535.925285662" Jan 22 12:30:02 crc kubenswrapper[5120]: I0122 12:30:02.176082 5120 generic.go:358] "Generic (PLEG): container finished" podID="7aba8941-2ebf-4bf6-94ab-a1b999b2366a" containerID="68b73452fa019a556704c5a2b540f627bf58c904b25685813e5ed80c9863d57b" exitCode=0 Jan 22 12:30:02 crc kubenswrapper[5120]: I0122 12:30:02.176734 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" event={"ID":"7aba8941-2ebf-4bf6-94ab-a1b999b2366a","Type":"ContainerDied","Data":"68b73452fa019a556704c5a2b540f627bf58c904b25685813e5ed80c9863d57b"} Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.189019 5120 generic.go:358] "Generic (PLEG): container finished" podID="3f073b85-c7cf-489a-8e89-7bf6bc9a2124" containerID="e252cf75f043a8d827ee19582fab16cdd6e6b640af539cb8d97f2f626b48055f" exitCode=0 Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.189080 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484750-sqt7t" event={"ID":"3f073b85-c7cf-489a-8e89-7bf6bc9a2124","Type":"ContainerDied","Data":"e252cf75f043a8d827ee19582fab16cdd6e6b640af539cb8d97f2f626b48055f"} Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.472773 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.625206 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-secret-volume\") pod \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.626179 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt75m\" (UniqueName: \"kubernetes.io/projected/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-kube-api-access-xt75m\") pod \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.626243 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-config-volume\") pod \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\" (UID: \"7aba8941-2ebf-4bf6-94ab-a1b999b2366a\") " Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.627332 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-config-volume" (OuterVolumeSpecName: "config-volume") pod "7aba8941-2ebf-4bf6-94ab-a1b999b2366a" (UID: "7aba8941-2ebf-4bf6-94ab-a1b999b2366a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.636567 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7aba8941-2ebf-4bf6-94ab-a1b999b2366a" (UID: "7aba8941-2ebf-4bf6-94ab-a1b999b2366a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.636662 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-kube-api-access-xt75m" (OuterVolumeSpecName: "kube-api-access-xt75m") pod "7aba8941-2ebf-4bf6-94ab-a1b999b2366a" (UID: "7aba8941-2ebf-4bf6-94ab-a1b999b2366a"). InnerVolumeSpecName "kube-api-access-xt75m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.728273 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.728342 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xt75m\" (UniqueName: \"kubernetes.io/projected/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-kube-api-access-xt75m\") on node \"crc\" DevicePath \"\"" Jan 22 12:30:03 crc kubenswrapper[5120]: I0122 12:30:03.728352 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aba8941-2ebf-4bf6-94ab-a1b999b2366a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.218877 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.219404 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484750-jzzch" event={"ID":"7aba8941-2ebf-4bf6-94ab-a1b999b2366a","Type":"ContainerDied","Data":"960d091e38ad7051f2aead273bce78e91011adf476eba8dfa0058d52b101cef3"} Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.219472 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="960d091e38ad7051f2aead273bce78e91011adf476eba8dfa0058d52b101cef3" Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.265155 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w"] Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.271165 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484705-g489w"] Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.502503 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484750-sqt7t" Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.642851 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtjsf\" (UniqueName: \"kubernetes.io/projected/3f073b85-c7cf-489a-8e89-7bf6bc9a2124-kube-api-access-mtjsf\") pod \"3f073b85-c7cf-489a-8e89-7bf6bc9a2124\" (UID: \"3f073b85-c7cf-489a-8e89-7bf6bc9a2124\") " Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.651564 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f073b85-c7cf-489a-8e89-7bf6bc9a2124-kube-api-access-mtjsf" (OuterVolumeSpecName: "kube-api-access-mtjsf") pod "3f073b85-c7cf-489a-8e89-7bf6bc9a2124" (UID: "3f073b85-c7cf-489a-8e89-7bf6bc9a2124"). InnerVolumeSpecName "kube-api-access-mtjsf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:30:04 crc kubenswrapper[5120]: I0122 12:30:04.744886 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mtjsf\" (UniqueName: \"kubernetes.io/projected/3f073b85-c7cf-489a-8e89-7bf6bc9a2124-kube-api-access-mtjsf\") on node \"crc\" DevicePath \"\"" Jan 22 12:30:05 crc kubenswrapper[5120]: I0122 12:30:05.231055 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484750-sqt7t" event={"ID":"3f073b85-c7cf-489a-8e89-7bf6bc9a2124","Type":"ContainerDied","Data":"954bb1e516fab9f4b414aac9ddaa786139a9c43c4e1ab0c6093badfd49c7c8bc"} Jan 22 12:30:05 crc kubenswrapper[5120]: I0122 12:30:05.231158 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="954bb1e516fab9f4b414aac9ddaa786139a9c43c4e1ab0c6093badfd49c7c8bc" Jan 22 12:30:05 crc kubenswrapper[5120]: I0122 12:30:05.231069 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484750-sqt7t" Jan 22 12:30:05 crc kubenswrapper[5120]: I0122 12:30:05.596211 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2667e960-0d1a-4c78-97ea-b1852f27ce17" path="/var/lib/kubelet/pods/2667e960-0d1a-4c78-97ea-b1852f27ce17/volumes" Jan 22 12:30:05 crc kubenswrapper[5120]: I0122 12:30:05.597573 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484744-7g58z"] Jan 22 12:30:05 crc kubenswrapper[5120]: I0122 12:30:05.598771 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484744-7g58z"] Jan 22 12:30:07 crc kubenswrapper[5120]: I0122 12:30:07.588193 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cde2753-8f27-404a-8fbc-d297e718b3b8" path="/var/lib/kubelet/pods/9cde2753-8f27-404a-8fbc-d297e718b3b8/volumes" Jan 22 12:30:09 crc kubenswrapper[5120]: I0122 12:30:09.574664 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:30:09 crc kubenswrapper[5120]: E0122 12:30:09.575153 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:30:23 crc kubenswrapper[5120]: I0122 12:30:23.573377 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:30:23 crc kubenswrapper[5120]: E0122 12:30:23.574862 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:30:35 crc kubenswrapper[5120]: I0122 12:30:35.595548 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:30:35 crc kubenswrapper[5120]: E0122 12:30:35.596780 5120 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:30:50 crc kubenswrapper[5120]: I0122 12:30:50.572281 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:30:50 crc kubenswrapper[5120]: E0122 12:30:50.573202 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:31:02 crc kubenswrapper[5120]: I0122 12:31:02.786395 5120 scope.go:117] "RemoveContainer" containerID="639c5a6f329d80d432312ff72463fef5484bc1f4f6098a9e08e4b8cc0e600243" Jan 22 12:31:02 crc kubenswrapper[5120]: I0122 12:31:02.816871 5120 scope.go:117] "RemoveContainer" containerID="265c28387fd25a8a35e27895239a66ae8d41b785dc39bc594bbfbfd15a6f5f83" Jan 22 12:31:05 crc kubenswrapper[5120]: I0122 12:31:05.585853 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:31:05 crc kubenswrapper[5120]: E0122 12:31:05.586332 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:31:16 crc kubenswrapper[5120]: I0122 12:31:16.572680 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:31:16 crc kubenswrapper[5120]: E0122 12:31:16.574040 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:31:31 crc kubenswrapper[5120]: I0122 12:31:31.572911 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:31:31 crc kubenswrapper[5120]: E0122 12:31:31.574315 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.642484 5120 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-pt2lk"] Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.644461 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f073b85-c7cf-489a-8e89-7bf6bc9a2124" containerName="oc" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.644484 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f073b85-c7cf-489a-8e89-7bf6bc9a2124" containerName="oc" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.644503 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7aba8941-2ebf-4bf6-94ab-a1b999b2366a" containerName="collect-profiles" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.644518 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aba8941-2ebf-4bf6-94ab-a1b999b2366a" containerName="collect-profiles" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.644792 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="3f073b85-c7cf-489a-8e89-7bf6bc9a2124" containerName="oc" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.644817 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="7aba8941-2ebf-4bf6-94ab-a1b999b2366a" containerName="collect-profiles" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.664336 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.678747 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pt2lk"] Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.801844 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb7rf\" (UniqueName: \"kubernetes.io/projected/82866e94-add3-43ee-890e-d133e4f2c590-kube-api-access-nb7rf\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.802389 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-catalog-content\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.802513 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-utilities\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.904635 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-catalog-content\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.904737 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-utilities\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " 
pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.904842 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nb7rf\" (UniqueName: \"kubernetes.io/projected/82866e94-add3-43ee-890e-d133e4f2c590-kube-api-access-nb7rf\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.906688 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-catalog-content\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.907427 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-utilities\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.935084 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb7rf\" (UniqueName: \"kubernetes.io/projected/82866e94-add3-43ee-890e-d133e4f2c590-kube-api-access-nb7rf\") pod \"redhat-operators-pt2lk\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:44 crc kubenswrapper[5120]: I0122 12:31:44.997560 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:45 crc kubenswrapper[5120]: I0122 12:31:45.263797 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pt2lk"] Jan 22 12:31:45 crc kubenswrapper[5120]: I0122 12:31:45.268891 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:31:45 crc kubenswrapper[5120]: I0122 12:31:45.315023 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt2lk" event={"ID":"82866e94-add3-43ee-890e-d133e4f2c590","Type":"ContainerStarted","Data":"e59ad32aa35941275f930ff7acca21c531282b3c771108f6369332b62762c5cc"} Jan 22 12:31:45 crc kubenswrapper[5120]: I0122 12:31:45.581778 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:31:45 crc kubenswrapper[5120]: E0122 12:31:45.582454 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:31:46 crc kubenswrapper[5120]: I0122 12:31:46.331336 5120 generic.go:358] "Generic (PLEG): container finished" podID="82866e94-add3-43ee-890e-d133e4f2c590" containerID="c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7" exitCode=0 Jan 22 12:31:46 crc kubenswrapper[5120]: I0122 12:31:46.331528 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt2lk" 
event={"ID":"82866e94-add3-43ee-890e-d133e4f2c590","Type":"ContainerDied","Data":"c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7"} Jan 22 12:31:47 crc kubenswrapper[5120]: I0122 12:31:47.344156 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt2lk" event={"ID":"82866e94-add3-43ee-890e-d133e4f2c590","Type":"ContainerStarted","Data":"32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1"} Jan 22 12:31:48 crc kubenswrapper[5120]: I0122 12:31:48.359181 5120 generic.go:358] "Generic (PLEG): container finished" podID="82866e94-add3-43ee-890e-d133e4f2c590" containerID="32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1" exitCode=0 Jan 22 12:31:48 crc kubenswrapper[5120]: I0122 12:31:48.359875 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt2lk" event={"ID":"82866e94-add3-43ee-890e-d133e4f2c590","Type":"ContainerDied","Data":"32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1"} Jan 22 12:31:49 crc kubenswrapper[5120]: I0122 12:31:49.371209 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt2lk" event={"ID":"82866e94-add3-43ee-890e-d133e4f2c590","Type":"ContainerStarted","Data":"69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4"} Jan 22 12:31:49 crc kubenswrapper[5120]: I0122 12:31:49.398639 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pt2lk" podStartSLOduration=4.690976065 podStartE2EDuration="5.398611713s" podCreationTimestamp="2026-01-22 12:31:44 +0000 UTC" firstStartedPulling="2026-01-22 12:31:46.334116581 +0000 UTC m=+2641.078064962" lastFinishedPulling="2026-01-22 12:31:47.041752269 +0000 UTC m=+2641.785700610" observedRunningTime="2026-01-22 12:31:49.388562642 +0000 UTC m=+2644.132511023" watchObservedRunningTime="2026-01-22 12:31:49.398611713 +0000 UTC m=+2644.142560074" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.010227 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7qptq"] Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.794178 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qptq"] Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.794617 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.872342 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-catalog-content\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.872540 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-utilities\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.872817 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8qbw\" (UniqueName: \"kubernetes.io/projected/ad7e51cc-e89a-4bed-b500-3b766d041fd7-kube-api-access-p8qbw\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.974448 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p8qbw\" (UniqueName: \"kubernetes.io/projected/ad7e51cc-e89a-4bed-b500-3b766d041fd7-kube-api-access-p8qbw\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.974591 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-catalog-content\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.974680 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-utilities\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.977229 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-utilities\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.977537 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-catalog-content\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.999536 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:54 crc kubenswrapper[5120]: I0122 12:31:54.999717 5120 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:55 crc kubenswrapper[5120]: I0122 12:31:55.019190 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8qbw\" (UniqueName: \"kubernetes.io/projected/ad7e51cc-e89a-4bed-b500-3b766d041fd7-kube-api-access-p8qbw\") pod \"certified-operators-7qptq\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:55 crc kubenswrapper[5120]: I0122 12:31:55.084829 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:55 crc kubenswrapper[5120]: I0122 12:31:55.134843 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:31:55 crc kubenswrapper[5120]: I0122 12:31:55.488008 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:55 crc kubenswrapper[5120]: I0122 12:31:55.505968 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qptq"] Jan 22 12:31:55 crc kubenswrapper[5120]: W0122 12:31:55.518582 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad7e51cc_e89a_4bed_b500_3b766d041fd7.slice/crio-0641f5f4426774a5da4d88ad3c9e9c1a6f008cf6829b9ca41897077c4da4f014 WatchSource:0}: Error finding container 0641f5f4426774a5da4d88ad3c9e9c1a6f008cf6829b9ca41897077c4da4f014: Status 404 returned error can't find the container with id 0641f5f4426774a5da4d88ad3c9e9c1a6f008cf6829b9ca41897077c4da4f014 Jan 22 12:31:56 crc kubenswrapper[5120]: I0122 12:31:56.435822 5120 generic.go:358] "Generic (PLEG): container finished" podID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerID="4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851" exitCode=0 Jan 22 12:31:56 crc kubenswrapper[5120]: I0122 12:31:56.436017 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qptq" event={"ID":"ad7e51cc-e89a-4bed-b500-3b766d041fd7","Type":"ContainerDied","Data":"4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851"} Jan 22 12:31:56 crc kubenswrapper[5120]: I0122 12:31:56.436727 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qptq" event={"ID":"ad7e51cc-e89a-4bed-b500-3b766d041fd7","Type":"ContainerStarted","Data":"0641f5f4426774a5da4d88ad3c9e9c1a6f008cf6829b9ca41897077c4da4f014"} Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.437880 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pt2lk"] Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.456358 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pt2lk" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="registry-server" containerID="cri-o://69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4" gracePeriod=2 Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.572733 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:31:57 crc kubenswrapper[5120]: E0122 12:31:57.573173 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.876435 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.945892 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-catalog-content\") pod \"82866e94-add3-43ee-890e-d133e4f2c590\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.946495 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-utilities\") pod \"82866e94-add3-43ee-890e-d133e4f2c590\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.946602 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb7rf\" (UniqueName: \"kubernetes.io/projected/82866e94-add3-43ee-890e-d133e4f2c590-kube-api-access-nb7rf\") pod \"82866e94-add3-43ee-890e-d133e4f2c590\" (UID: \"82866e94-add3-43ee-890e-d133e4f2c590\") " Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.947531 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-utilities" (OuterVolumeSpecName: "utilities") pod "82866e94-add3-43ee-890e-d133e4f2c590" (UID: "82866e94-add3-43ee-890e-d133e4f2c590"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:31:57 crc kubenswrapper[5120]: I0122 12:31:57.955202 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82866e94-add3-43ee-890e-d133e4f2c590-kube-api-access-nb7rf" (OuterVolumeSpecName: "kube-api-access-nb7rf") pod "82866e94-add3-43ee-890e-d133e4f2c590" (UID: "82866e94-add3-43ee-890e-d133e4f2c590"). InnerVolumeSpecName "kube-api-access-nb7rf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.049503 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.049550 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nb7rf\" (UniqueName: \"kubernetes.io/projected/82866e94-add3-43ee-890e-d133e4f2c590-kube-api-access-nb7rf\") on node \"crc\" DevicePath \"\"" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.469357 5120 generic.go:358] "Generic (PLEG): container finished" podID="82866e94-add3-43ee-890e-d133e4f2c590" containerID="69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4" exitCode=0 Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.469504 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt2lk" event={"ID":"82866e94-add3-43ee-890e-d133e4f2c590","Type":"ContainerDied","Data":"69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4"} Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.469538 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt2lk" event={"ID":"82866e94-add3-43ee-890e-d133e4f2c590","Type":"ContainerDied","Data":"e59ad32aa35941275f930ff7acca21c531282b3c771108f6369332b62762c5cc"} Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.469558 5120 scope.go:117] "RemoveContainer" containerID="69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.469831 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pt2lk" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.473477 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qptq" event={"ID":"ad7e51cc-e89a-4bed-b500-3b766d041fd7","Type":"ContainerStarted","Data":"134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21"} Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.509495 5120 scope.go:117] "RemoveContainer" containerID="32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.551355 5120 scope.go:117] "RemoveContainer" containerID="c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.600904 5120 scope.go:117] "RemoveContainer" containerID="69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4" Jan 22 12:31:58 crc kubenswrapper[5120]: E0122 12:31:58.601399 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4\": container with ID starting with 69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4 not found: ID does not exist" containerID="69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.601430 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4"} err="failed to get container status \"69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4\": rpc error: code = NotFound desc = could not find container \"69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4\": container with ID starting with 69957a67c75d05747835227cfeb8100fff5139cc38f2b6df1e28a263f7b26fc4 not found: ID does not exist" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.601450 5120 scope.go:117] "RemoveContainer" containerID="32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1" Jan 22 12:31:58 crc kubenswrapper[5120]: E0122 12:31:58.601832 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1\": container with ID starting with 32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1 not found: ID does not exist" containerID="32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.601858 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1"} err="failed to get container status \"32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1\": rpc error: code = NotFound desc = could not find container \"32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1\": container with ID starting with 32debb225fe995d716ced448b4d9c613fd405cba10932dc52bcfc0744404f5d1 not found: ID does not exist" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.601873 5120 scope.go:117] "RemoveContainer" containerID="c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7" Jan 22 12:31:58 crc kubenswrapper[5120]: E0122 12:31:58.602176 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7\": container with ID starting with c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7 not found: ID does not exist" containerID="c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7" Jan 22 12:31:58 crc kubenswrapper[5120]: I0122 12:31:58.602203 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7"} err="failed to get container status \"c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7\": rpc error: code = NotFound desc = could not find container \"c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7\": container with ID starting with c57db96e52652762295d886204315fc7dc4b7bc944c38313f364d060900a88b7 not found: ID does not exist" Jan 22 12:31:59 crc kubenswrapper[5120]: I0122 12:31:59.492326 5120 generic.go:358] "Generic (PLEG): container finished" podID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerID="134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21" exitCode=0 Jan 22 12:31:59 crc kubenswrapper[5120]: I0122 12:31:59.492506 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qptq" event={"ID":"ad7e51cc-e89a-4bed-b500-3b766d041fd7","Type":"ContainerDied","Data":"134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21"} Jan 22 12:31:59 crc kubenswrapper[5120]: I0122 12:31:59.938235 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82866e94-add3-43ee-890e-d133e4f2c590" (UID: "82866e94-add3-43ee-890e-d133e4f2c590"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:31:59 crc kubenswrapper[5120]: I0122 12:31:59.979449 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82866e94-add3-43ee-890e-d133e4f2c590-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.072727 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pt2lk"] Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.077991 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pt2lk"] Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.176842 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484752-v5hcs"] Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.178068 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="extract-utilities" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.178096 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="extract-utilities" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.178125 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="extract-content" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.178132 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="extract-content" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.178148 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="registry-server" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.178160 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="registry-server" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.178323 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="82866e94-add3-43ee-890e-d133e4f2c590" containerName="registry-server" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.183248 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484752-v5hcs"] Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.183374 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.187078 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.187482 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.188330 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.283999 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwbsh\" (UniqueName: \"kubernetes.io/projected/7943991c-5c7d-4a50-80ac-42d7eb0f624f-kube-api-access-cwbsh\") pod \"auto-csr-approver-29484752-v5hcs\" (UID: \"7943991c-5c7d-4a50-80ac-42d7eb0f624f\") " pod="openshift-infra/auto-csr-approver-29484752-v5hcs" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.385201 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cwbsh\" (UniqueName: \"kubernetes.io/projected/7943991c-5c7d-4a50-80ac-42d7eb0f624f-kube-api-access-cwbsh\") pod \"auto-csr-approver-29484752-v5hcs\" (UID: \"7943991c-5c7d-4a50-80ac-42d7eb0f624f\") " pod="openshift-infra/auto-csr-approver-29484752-v5hcs" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.425552 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwbsh\" (UniqueName: \"kubernetes.io/projected/7943991c-5c7d-4a50-80ac-42d7eb0f624f-kube-api-access-cwbsh\") pod \"auto-csr-approver-29484752-v5hcs\" (UID: \"7943991c-5c7d-4a50-80ac-42d7eb0f624f\") " pod="openshift-infra/auto-csr-approver-29484752-v5hcs" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.503220 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qptq" event={"ID":"ad7e51cc-e89a-4bed-b500-3b766d041fd7","Type":"ContainerStarted","Data":"f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591"} Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.508451 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" Jan 22 12:32:00 crc kubenswrapper[5120]: I0122 12:32:00.532472 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7qptq" podStartSLOduration=6.6139342469999995 podStartE2EDuration="7.532449603s" podCreationTimestamp="2026-01-22 12:31:53 +0000 UTC" firstStartedPulling="2026-01-22 12:31:56.437029494 +0000 UTC m=+2651.180977845" lastFinishedPulling="2026-01-22 12:31:57.35554485 +0000 UTC m=+2652.099493201" observedRunningTime="2026-01-22 12:32:00.5273594 +0000 UTC m=+2655.271307781" watchObservedRunningTime="2026-01-22 12:32:00.532449603 +0000 UTC m=+2655.276397974" Jan 22 12:32:01 crc kubenswrapper[5120]: I0122 12:32:01.000480 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484752-v5hcs"] Jan 22 12:32:01 crc kubenswrapper[5120]: W0122 12:32:01.010187 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7943991c_5c7d_4a50_80ac_42d7eb0f624f.slice/crio-baf2c1ea70012e416695caee6a6a75d866e409974794706c6d607ebe771cb862 WatchSource:0}: Error finding container baf2c1ea70012e416695caee6a6a75d866e409974794706c6d607ebe771cb862: Status 404 returned error can't find the container with id baf2c1ea70012e416695caee6a6a75d866e409974794706c6d607ebe771cb862 Jan 22 12:32:01 crc kubenswrapper[5120]: I0122 12:32:01.514465 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" event={"ID":"7943991c-5c7d-4a50-80ac-42d7eb0f624f","Type":"ContainerStarted","Data":"baf2c1ea70012e416695caee6a6a75d866e409974794706c6d607ebe771cb862"} Jan 22 12:32:01 crc kubenswrapper[5120]: I0122 12:32:01.594182 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82866e94-add3-43ee-890e-d133e4f2c590" path="/var/lib/kubelet/pods/82866e94-add3-43ee-890e-d133e4f2c590/volumes" Jan 22 12:32:02 crc kubenswrapper[5120]: I0122 12:32:02.528240 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" event={"ID":"7943991c-5c7d-4a50-80ac-42d7eb0f624f","Type":"ContainerStarted","Data":"2871f40e4381a68e2190c46528c45a6f62b9393512cbac4263f64ed579203e6a"} Jan 22 12:32:02 crc kubenswrapper[5120]: I0122 12:32:02.547790 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" podStartSLOduration=1.5913614630000001 podStartE2EDuration="2.547771749s" podCreationTimestamp="2026-01-22 12:32:00 +0000 UTC" firstStartedPulling="2026-01-22 12:32:01.012118272 +0000 UTC m=+2655.756066633" lastFinishedPulling="2026-01-22 12:32:01.968528578 +0000 UTC m=+2656.712476919" observedRunningTime="2026-01-22 12:32:02.542509603 +0000 UTC m=+2657.286457944" watchObservedRunningTime="2026-01-22 12:32:02.547771749 +0000 UTC m=+2657.291720090" Jan 22 12:32:03 crc kubenswrapper[5120]: I0122 12:32:03.542120 5120 generic.go:358] "Generic (PLEG): container finished" podID="7943991c-5c7d-4a50-80ac-42d7eb0f624f" containerID="2871f40e4381a68e2190c46528c45a6f62b9393512cbac4263f64ed579203e6a" exitCode=0 Jan 22 12:32:03 crc kubenswrapper[5120]: I0122 12:32:03.542376 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" event={"ID":"7943991c-5c7d-4a50-80ac-42d7eb0f624f","Type":"ContainerDied","Data":"2871f40e4381a68e2190c46528c45a6f62b9393512cbac4263f64ed579203e6a"} 
Jan 22 12:32:04 crc kubenswrapper[5120]: I0122 12:32:04.865156 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" Jan 22 12:32:04 crc kubenswrapper[5120]: I0122 12:32:04.972495 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwbsh\" (UniqueName: \"kubernetes.io/projected/7943991c-5c7d-4a50-80ac-42d7eb0f624f-kube-api-access-cwbsh\") pod \"7943991c-5c7d-4a50-80ac-42d7eb0f624f\" (UID: \"7943991c-5c7d-4a50-80ac-42d7eb0f624f\") " Jan 22 12:32:04 crc kubenswrapper[5120]: I0122 12:32:04.998385 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7943991c-5c7d-4a50-80ac-42d7eb0f624f-kube-api-access-cwbsh" (OuterVolumeSpecName: "kube-api-access-cwbsh") pod "7943991c-5c7d-4a50-80ac-42d7eb0f624f" (UID: "7943991c-5c7d-4a50-80ac-42d7eb0f624f"). InnerVolumeSpecName "kube-api-access-cwbsh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.073801 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cwbsh\" (UniqueName: \"kubernetes.io/projected/7943991c-5c7d-4a50-80ac-42d7eb0f624f-kube-api-access-cwbsh\") on node \"crc\" DevicePath \"\"" Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.135744 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.135811 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.205913 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.590479 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.600465 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484752-v5hcs" event={"ID":"7943991c-5c7d-4a50-80ac-42d7eb0f624f","Type":"ContainerDied","Data":"baf2c1ea70012e416695caee6a6a75d866e409974794706c6d607ebe771cb862"} Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.600613 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baf2c1ea70012e416695caee6a6a75d866e409974794706c6d607ebe771cb862" Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.610980 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484746-xsmfp"] Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.618400 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484746-xsmfp"] Jan 22 12:32:05 crc kubenswrapper[5120]: I0122 12:32:05.661899 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:32:07 crc kubenswrapper[5120]: I0122 12:32:07.405061 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7qptq"] Jan 22 12:32:07 crc kubenswrapper[5120]: I0122 12:32:07.581869 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cba6b62-807f-4e37-b350-bc4eef13747b" path="/var/lib/kubelet/pods/7cba6b62-807f-4e37-b350-bc4eef13747b/volumes" Jan 22 12:32:07 crc kubenswrapper[5120]: I0122 12:32:07.612626 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7qptq" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="registry-server" containerID="cri-o://f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591" gracePeriod=2 Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.046272 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.236135 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-catalog-content\") pod \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.236301 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8qbw\" (UniqueName: \"kubernetes.io/projected/ad7e51cc-e89a-4bed-b500-3b766d041fd7-kube-api-access-p8qbw\") pod \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.236347 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-utilities\") pod \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\" (UID: \"ad7e51cc-e89a-4bed-b500-3b766d041fd7\") " Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.240208 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-utilities" (OuterVolumeSpecName: "utilities") pod "ad7e51cc-e89a-4bed-b500-3b766d041fd7" (UID: "ad7e51cc-e89a-4bed-b500-3b766d041fd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.246794 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad7e51cc-e89a-4bed-b500-3b766d041fd7-kube-api-access-p8qbw" (OuterVolumeSpecName: "kube-api-access-p8qbw") pod "ad7e51cc-e89a-4bed-b500-3b766d041fd7" (UID: "ad7e51cc-e89a-4bed-b500-3b766d041fd7"). InnerVolumeSpecName "kube-api-access-p8qbw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.281872 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad7e51cc-e89a-4bed-b500-3b766d041fd7" (UID: "ad7e51cc-e89a-4bed-b500-3b766d041fd7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.338354 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p8qbw\" (UniqueName: \"kubernetes.io/projected/ad7e51cc-e89a-4bed-b500-3b766d041fd7-kube-api-access-p8qbw\") on node \"crc\" DevicePath \"\"" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.338393 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.338404 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad7e51cc-e89a-4bed-b500-3b766d041fd7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.639048 5120 generic.go:358] "Generic (PLEG): container finished" podID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerID="f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591" exitCode=0 Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.639629 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qptq" event={"ID":"ad7e51cc-e89a-4bed-b500-3b766d041fd7","Type":"ContainerDied","Data":"f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591"} Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.639674 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qptq" event={"ID":"ad7e51cc-e89a-4bed-b500-3b766d041fd7","Type":"ContainerDied","Data":"0641f5f4426774a5da4d88ad3c9e9c1a6f008cf6829b9ca41897077c4da4f014"} Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.639706 5120 scope.go:117] "RemoveContainer" containerID="f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.639917 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7qptq" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.671610 5120 scope.go:117] "RemoveContainer" containerID="134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.707099 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7qptq"] Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.714525 5120 scope.go:117] "RemoveContainer" containerID="4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.720137 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7qptq"] Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.746923 5120 scope.go:117] "RemoveContainer" containerID="f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591" Jan 22 12:32:08 crc kubenswrapper[5120]: E0122 12:32:08.747374 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591\": container with ID starting with f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591 not found: ID does not exist" containerID="f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.747418 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591"} err="failed to get container status \"f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591\": rpc error: code = NotFound desc = could not find container \"f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591\": container with ID starting with f8456a4c00892b86634e946891f5b1461e175ce5fd4694c98114475e50788591 not found: ID does not exist" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.747445 5120 scope.go:117] "RemoveContainer" containerID="134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21" Jan 22 12:32:08 crc kubenswrapper[5120]: E0122 12:32:08.747689 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21\": container with ID starting with 134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21 not found: ID does not exist" containerID="134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.747729 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21"} err="failed to get container status \"134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21\": rpc error: code = NotFound desc = could not find container \"134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21\": container with ID starting with 134af30ce92c4197ef4b359c4bc8c40b396cd3d79ca40996dd5797f8eff9af21 not found: ID does not exist" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.747753 5120 scope.go:117] "RemoveContainer" containerID="4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851" Jan 22 12:32:08 crc kubenswrapper[5120]: E0122 12:32:08.748590 5120 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851\": container with ID starting with 4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851 not found: ID does not exist" containerID="4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851" Jan 22 12:32:08 crc kubenswrapper[5120]: I0122 12:32:08.748634 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851"} err="failed to get container status \"4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851\": rpc error: code = NotFound desc = could not find container \"4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851\": container with ID starting with 4f8545fffde05edbe270181f69222f0db3f0b8b5dcc136e595275eb8016ac851 not found: ID does not exist" Jan 22 12:32:09 crc kubenswrapper[5120]: I0122 12:32:09.580931 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" path="/var/lib/kubelet/pods/ad7e51cc-e89a-4bed-b500-3b766d041fd7/volumes" Jan 22 12:32:11 crc kubenswrapper[5120]: I0122 12:32:11.586168 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:32:11 crc kubenswrapper[5120]: E0122 12:32:11.586827 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:32:25 crc kubenswrapper[5120]: I0122 12:32:25.581523 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:32:25 crc kubenswrapper[5120]: E0122 12:32:25.585322 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:32:37 crc kubenswrapper[5120]: I0122 12:32:37.580233 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:32:37 crc kubenswrapper[5120]: E0122 12:32:37.580762 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:32:49 crc kubenswrapper[5120]: I0122 12:32:49.588126 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:32:49 crc kubenswrapper[5120]: E0122 12:32:49.589291 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:32:51 crc kubenswrapper[5120]: I0122 12:32:51.764278 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:32:51 crc kubenswrapper[5120]: I0122 12:32:51.768086 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:32:51 crc kubenswrapper[5120]: I0122 12:32:51.778272 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:32:51 crc kubenswrapper[5120]: I0122 12:32:51.780170 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:33:00 crc kubenswrapper[5120]: I0122 12:33:00.572722 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:33:00 crc kubenswrapper[5120]: E0122 12:33:00.574320 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:33:03 crc kubenswrapper[5120]: I0122 12:33:03.046066 5120 scope.go:117] "RemoveContainer" containerID="6b1f924c30425523c67b96c181b3a024d387e63c67e79368cd5fa28556694ba7" Jan 22 12:33:15 crc kubenswrapper[5120]: I0122 12:33:15.586166 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:33:15 crc kubenswrapper[5120]: E0122 12:33:15.587667 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:33:26 crc kubenswrapper[5120]: I0122 12:33:26.572692 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:33:26 crc kubenswrapper[5120]: E0122 12:33:26.574003 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:33:40 crc kubenswrapper[5120]: I0122 12:33:40.572300 5120 scope.go:117] 
"RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:33:40 crc kubenswrapper[5120]: E0122 12:33:40.573304 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:33:51 crc kubenswrapper[5120]: I0122 12:33:51.572737 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:33:51 crc kubenswrapper[5120]: E0122 12:33:51.574030 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.155124 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484754-fgcqw"] Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157177 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="extract-utilities" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157206 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="extract-utilities" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157269 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="registry-server" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157282 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="registry-server" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157304 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="extract-content" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157316 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="extract-content" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157349 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7943991c-5c7d-4a50-80ac-42d7eb0f624f" containerName="oc" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157362 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7943991c-5c7d-4a50-80ac-42d7eb0f624f" containerName="oc" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157562 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="7943991c-5c7d-4a50-80ac-42d7eb0f624f" containerName="oc" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.157591 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="ad7e51cc-e89a-4bed-b500-3b766d041fd7" containerName="registry-server" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.175864 5120 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["openshift-infra/auto-csr-approver-29484754-fgcqw"] Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.176112 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484754-fgcqw" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.178494 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.179531 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.180552 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.375035 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tldcj\" (UniqueName: \"kubernetes.io/projected/18ea5adf-2b29-46ff-8c49-515dd1615879-kube-api-access-tldcj\") pod \"auto-csr-approver-29484754-fgcqw\" (UID: \"18ea5adf-2b29-46ff-8c49-515dd1615879\") " pod="openshift-infra/auto-csr-approver-29484754-fgcqw" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.476879 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tldcj\" (UniqueName: \"kubernetes.io/projected/18ea5adf-2b29-46ff-8c49-515dd1615879-kube-api-access-tldcj\") pod \"auto-csr-approver-29484754-fgcqw\" (UID: \"18ea5adf-2b29-46ff-8c49-515dd1615879\") " pod="openshift-infra/auto-csr-approver-29484754-fgcqw" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.501861 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tldcj\" (UniqueName: \"kubernetes.io/projected/18ea5adf-2b29-46ff-8c49-515dd1615879-kube-api-access-tldcj\") pod \"auto-csr-approver-29484754-fgcqw\" (UID: \"18ea5adf-2b29-46ff-8c49-515dd1615879\") " pod="openshift-infra/auto-csr-approver-29484754-fgcqw" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.508521 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484754-fgcqw" Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.751139 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484754-fgcqw"] Jan 22 12:34:00 crc kubenswrapper[5120]: I0122 12:34:00.858765 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484754-fgcqw" event={"ID":"18ea5adf-2b29-46ff-8c49-515dd1615879","Type":"ContainerStarted","Data":"30fe1094d8840610048424f95f1d95c399d602663b392ffad2f6d142d13ecb46"} Jan 22 12:34:02 crc kubenswrapper[5120]: I0122 12:34:02.877749 5120 generic.go:358] "Generic (PLEG): container finished" podID="18ea5adf-2b29-46ff-8c49-515dd1615879" containerID="f0addba7235b3cf2978323be2668d57256d6e16bc46c625f5d2101670fd5355e" exitCode=0 Jan 22 12:34:02 crc kubenswrapper[5120]: I0122 12:34:02.878285 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484754-fgcqw" event={"ID":"18ea5adf-2b29-46ff-8c49-515dd1615879","Type":"ContainerDied","Data":"f0addba7235b3cf2978323be2668d57256d6e16bc46c625f5d2101670fd5355e"} Jan 22 12:34:04 crc kubenswrapper[5120]: I0122 12:34:04.233911 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484754-fgcqw" Jan 22 12:34:04 crc kubenswrapper[5120]: I0122 12:34:04.334391 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tldcj\" (UniqueName: \"kubernetes.io/projected/18ea5adf-2b29-46ff-8c49-515dd1615879-kube-api-access-tldcj\") pod \"18ea5adf-2b29-46ff-8c49-515dd1615879\" (UID: \"18ea5adf-2b29-46ff-8c49-515dd1615879\") " Jan 22 12:34:04 crc kubenswrapper[5120]: I0122 12:34:04.358144 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ea5adf-2b29-46ff-8c49-515dd1615879-kube-api-access-tldcj" (OuterVolumeSpecName: "kube-api-access-tldcj") pod "18ea5adf-2b29-46ff-8c49-515dd1615879" (UID: "18ea5adf-2b29-46ff-8c49-515dd1615879"). InnerVolumeSpecName "kube-api-access-tldcj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:34:04 crc kubenswrapper[5120]: I0122 12:34:04.436768 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tldcj\" (UniqueName: \"kubernetes.io/projected/18ea5adf-2b29-46ff-8c49-515dd1615879-kube-api-access-tldcj\") on node \"crc\" DevicePath \"\"" Jan 22 12:34:04 crc kubenswrapper[5120]: I0122 12:34:04.899113 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484754-fgcqw" Jan 22 12:34:04 crc kubenswrapper[5120]: I0122 12:34:04.899170 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484754-fgcqw" event={"ID":"18ea5adf-2b29-46ff-8c49-515dd1615879","Type":"ContainerDied","Data":"30fe1094d8840610048424f95f1d95c399d602663b392ffad2f6d142d13ecb46"} Jan 22 12:34:04 crc kubenswrapper[5120]: I0122 12:34:04.899221 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30fe1094d8840610048424f95f1d95c399d602663b392ffad2f6d142d13ecb46" Jan 22 12:34:05 crc kubenswrapper[5120]: I0122 12:34:05.344336 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484748-kjqpj"] Jan 22 12:34:05 crc kubenswrapper[5120]: I0122 12:34:05.356482 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484748-kjqpj"] Jan 22 12:34:05 crc kubenswrapper[5120]: I0122 12:34:05.584933 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e6cf90-948d-4188-8603-4f42f5a2400e" path="/var/lib/kubelet/pods/19e6cf90-948d-4188-8603-4f42f5a2400e/volumes" Jan 22 12:34:05 crc kubenswrapper[5120]: I0122 12:34:05.586605 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:34:05 crc kubenswrapper[5120]: I0122 12:34:05.912638 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"1b41c7747b82f18e38fe4a73127e6bf34587d1370adab02c57f7c18e148832ca"} Jan 22 12:34:14 crc kubenswrapper[5120]: I0122 12:34:14.886168 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4r8gt"] Jan 22 12:34:14 crc kubenswrapper[5120]: I0122 12:34:14.887876 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="18ea5adf-2b29-46ff-8c49-515dd1615879" containerName="oc" Jan 22 12:34:14 crc kubenswrapper[5120]: I0122 12:34:14.887899 5120 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="18ea5adf-2b29-46ff-8c49-515dd1615879" containerName="oc" Jan 22 12:34:14 crc kubenswrapper[5120]: I0122 12:34:14.888727 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="18ea5adf-2b29-46ff-8c49-515dd1615879" containerName="oc" Jan 22 12:34:14 crc kubenswrapper[5120]: I0122 12:34:14.895723 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:14 crc kubenswrapper[5120]: I0122 12:34:14.918929 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4r8gt"] Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.046746 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-utilities\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.046948 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjlpc\" (UniqueName: \"kubernetes.io/projected/d3f4d8fb-8397-476e-8903-7e5968484c8d-kube-api-access-tjlpc\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.047077 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-catalog-content\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.148265 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-utilities\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.148361 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tjlpc\" (UniqueName: \"kubernetes.io/projected/d3f4d8fb-8397-476e-8903-7e5968484c8d-kube-api-access-tjlpc\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.148749 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-catalog-content\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.148857 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-utilities\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.149144 5120 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-catalog-content\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.171863 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjlpc\" (UniqueName: \"kubernetes.io/projected/d3f4d8fb-8397-476e-8903-7e5968484c8d-kube-api-access-tjlpc\") pod \"community-operators-4r8gt\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.274364 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:15 crc kubenswrapper[5120]: I0122 12:34:15.749348 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4r8gt"] Jan 22 12:34:15 crc kubenswrapper[5120]: W0122 12:34:15.767036 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f4d8fb_8397_476e_8903_7e5968484c8d.slice/crio-3181195e7a0d7600212f0c844db0ed360b1c6eb7ff74543f2c8d582e0854b6d6 WatchSource:0}: Error finding container 3181195e7a0d7600212f0c844db0ed360b1c6eb7ff74543f2c8d582e0854b6d6: Status 404 returned error can't find the container with id 3181195e7a0d7600212f0c844db0ed360b1c6eb7ff74543f2c8d582e0854b6d6 Jan 22 12:34:16 crc kubenswrapper[5120]: I0122 12:34:16.018408 5120 generic.go:358] "Generic (PLEG): container finished" podID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerID="f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812" exitCode=0 Jan 22 12:34:16 crc kubenswrapper[5120]: I0122 12:34:16.018534 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gt" event={"ID":"d3f4d8fb-8397-476e-8903-7e5968484c8d","Type":"ContainerDied","Data":"f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812"} Jan 22 12:34:16 crc kubenswrapper[5120]: I0122 12:34:16.018575 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gt" event={"ID":"d3f4d8fb-8397-476e-8903-7e5968484c8d","Type":"ContainerStarted","Data":"3181195e7a0d7600212f0c844db0ed360b1c6eb7ff74543f2c8d582e0854b6d6"} Jan 22 12:34:17 crc kubenswrapper[5120]: I0122 12:34:17.029313 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gt" event={"ID":"d3f4d8fb-8397-476e-8903-7e5968484c8d","Type":"ContainerStarted","Data":"bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033"} Jan 22 12:34:18 crc kubenswrapper[5120]: I0122 12:34:18.052333 5120 generic.go:358] "Generic (PLEG): container finished" podID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerID="bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033" exitCode=0 Jan 22 12:34:18 crc kubenswrapper[5120]: I0122 12:34:18.052559 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gt" event={"ID":"d3f4d8fb-8397-476e-8903-7e5968484c8d","Type":"ContainerDied","Data":"bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033"} Jan 22 12:34:19 crc kubenswrapper[5120]: I0122 12:34:19.064754 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gt" 
event={"ID":"d3f4d8fb-8397-476e-8903-7e5968484c8d","Type":"ContainerStarted","Data":"0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0"} Jan 22 12:34:25 crc kubenswrapper[5120]: I0122 12:34:25.275213 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:25 crc kubenswrapper[5120]: I0122 12:34:25.277717 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:25 crc kubenswrapper[5120]: I0122 12:34:25.341589 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:25 crc kubenswrapper[5120]: I0122 12:34:25.386044 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4r8gt" podStartSLOduration=10.801212412 podStartE2EDuration="11.386009847s" podCreationTimestamp="2026-01-22 12:34:14 +0000 UTC" firstStartedPulling="2026-01-22 12:34:16.021287656 +0000 UTC m=+2790.765236037" lastFinishedPulling="2026-01-22 12:34:16.606085101 +0000 UTC m=+2791.350033472" observedRunningTime="2026-01-22 12:34:19.088850112 +0000 UTC m=+2793.832798463" watchObservedRunningTime="2026-01-22 12:34:25.386009847 +0000 UTC m=+2800.129958248" Jan 22 12:34:26 crc kubenswrapper[5120]: I0122 12:34:26.205814 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:26 crc kubenswrapper[5120]: I0122 12:34:26.258159 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4r8gt"] Jan 22 12:34:28 crc kubenswrapper[5120]: I0122 12:34:28.156559 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4r8gt" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerName="registry-server" containerID="cri-o://0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0" gracePeriod=2 Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.048404 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.165700 5120 generic.go:358] "Generic (PLEG): container finished" podID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerID="0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0" exitCode=0 Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.165743 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gt" event={"ID":"d3f4d8fb-8397-476e-8903-7e5968484c8d","Type":"ContainerDied","Data":"0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0"} Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.165791 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r8gt" event={"ID":"d3f4d8fb-8397-476e-8903-7e5968484c8d","Type":"ContainerDied","Data":"3181195e7a0d7600212f0c844db0ed360b1c6eb7ff74543f2c8d582e0854b6d6"} Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.165815 5120 scope.go:117] "RemoveContainer" containerID="0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.165834 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4r8gt" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.190316 5120 scope.go:117] "RemoveContainer" containerID="bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.211672 5120 scope.go:117] "RemoveContainer" containerID="f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.230992 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjlpc\" (UniqueName: \"kubernetes.io/projected/d3f4d8fb-8397-476e-8903-7e5968484c8d-kube-api-access-tjlpc\") pod \"d3f4d8fb-8397-476e-8903-7e5968484c8d\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.231072 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-catalog-content\") pod \"d3f4d8fb-8397-476e-8903-7e5968484c8d\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.231284 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-utilities\") pod \"d3f4d8fb-8397-476e-8903-7e5968484c8d\" (UID: \"d3f4d8fb-8397-476e-8903-7e5968484c8d\") " Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.232795 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-utilities" (OuterVolumeSpecName: "utilities") pod "d3f4d8fb-8397-476e-8903-7e5968484c8d" (UID: "d3f4d8fb-8397-476e-8903-7e5968484c8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.241276 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f4d8fb-8397-476e-8903-7e5968484c8d-kube-api-access-tjlpc" (OuterVolumeSpecName: "kube-api-access-tjlpc") pod "d3f4d8fb-8397-476e-8903-7e5968484c8d" (UID: "d3f4d8fb-8397-476e-8903-7e5968484c8d"). InnerVolumeSpecName "kube-api-access-tjlpc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.285479 5120 scope.go:117] "RemoveContainer" containerID="0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0" Jan 22 12:34:29 crc kubenswrapper[5120]: E0122 12:34:29.285905 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0\": container with ID starting with 0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0 not found: ID does not exist" containerID="0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.285986 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0"} err="failed to get container status \"0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0\": rpc error: code = NotFound desc = could not find container \"0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0\": container with ID starting with 0241f1c43a588c15860e0e90f0860c84485bd91f1b4b322c04cf94c13d5eddf0 not found: ID does not exist" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.286016 5120 scope.go:117] "RemoveContainer" containerID="bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033" Jan 22 12:34:29 crc kubenswrapper[5120]: E0122 12:34:29.286375 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033\": container with ID starting with bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033 not found: ID does not exist" containerID="bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.286406 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033"} err="failed to get container status \"bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033\": rpc error: code = NotFound desc = could not find container \"bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033\": container with ID starting with bc1c304dba40ea6c1f51b91a5a191cf01626bbc05f10ea5b13f3abdba4bbb033 not found: ID does not exist" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.286429 5120 scope.go:117] "RemoveContainer" containerID="f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812" Jan 22 12:34:29 crc kubenswrapper[5120]: E0122 12:34:29.286644 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812\": container with ID starting with f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812 not found: ID does not exist" containerID="f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.286680 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812"} err="failed to get container status \"f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812\": rpc error: code = NotFound desc = could not 
find container \"f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812\": container with ID starting with f5da2b4573f411fd1c7ba90d5135b3b3ad5e589b9d33a895f84d5f73a9a42812 not found: ID does not exist" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.291401 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3f4d8fb-8397-476e-8903-7e5968484c8d" (UID: "d3f4d8fb-8397-476e-8903-7e5968484c8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.333104 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.333142 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tjlpc\" (UniqueName: \"kubernetes.io/projected/d3f4d8fb-8397-476e-8903-7e5968484c8d-kube-api-access-tjlpc\") on node \"crc\" DevicePath \"\"" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.333159 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f4d8fb-8397-476e-8903-7e5968484c8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.520115 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4r8gt"] Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.531329 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4r8gt"] Jan 22 12:34:29 crc kubenswrapper[5120]: E0122 12:34:29.573322 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f4d8fb_8397_476e_8903_7e5968484c8d.slice\": RecentStats: unable to find data in memory cache]" Jan 22 12:34:29 crc kubenswrapper[5120]: I0122 12:34:29.580656 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" path="/var/lib/kubelet/pods/d3f4d8fb-8397-476e-8903-7e5968484c8d/volumes" Jan 22 12:35:03 crc kubenswrapper[5120]: I0122 12:35:03.247928 5120 scope.go:117] "RemoveContainer" containerID="87dcaa48bc692cdf9ab6041cfa08659e3160bb8e1c6b034284ede8cacd86f655" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.148465 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484756-svdvw"] Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.150019 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerName="extract-utilities" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.150039 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerName="extract-utilities" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.150065 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerName="registry-server" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.150072 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" 
containerName="registry-server" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.150105 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerName="extract-content" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.150113 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerName="extract-content" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.150292 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="d3f4d8fb-8397-476e-8903-7e5968484c8d" containerName="registry-server" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.156750 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484756-svdvw"] Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.156857 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484756-svdvw" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.159462 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.159477 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.160409 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.208018 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjh4n\" (UniqueName: \"kubernetes.io/projected/823e7d1b-b74d-47c1-967a-fc44dab160b8-kube-api-access-sjh4n\") pod \"auto-csr-approver-29484756-svdvw\" (UID: \"823e7d1b-b74d-47c1-967a-fc44dab160b8\") " pod="openshift-infra/auto-csr-approver-29484756-svdvw" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.309188 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sjh4n\" (UniqueName: \"kubernetes.io/projected/823e7d1b-b74d-47c1-967a-fc44dab160b8-kube-api-access-sjh4n\") pod \"auto-csr-approver-29484756-svdvw\" (UID: \"823e7d1b-b74d-47c1-967a-fc44dab160b8\") " pod="openshift-infra/auto-csr-approver-29484756-svdvw" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.337071 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjh4n\" (UniqueName: \"kubernetes.io/projected/823e7d1b-b74d-47c1-967a-fc44dab160b8-kube-api-access-sjh4n\") pod \"auto-csr-approver-29484756-svdvw\" (UID: \"823e7d1b-b74d-47c1-967a-fc44dab160b8\") " pod="openshift-infra/auto-csr-approver-29484756-svdvw" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.475799 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484756-svdvw" Jan 22 12:36:00 crc kubenswrapper[5120]: I0122 12:36:00.720515 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484756-svdvw"] Jan 22 12:36:01 crc kubenswrapper[5120]: I0122 12:36:01.059289 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484756-svdvw" event={"ID":"823e7d1b-b74d-47c1-967a-fc44dab160b8","Type":"ContainerStarted","Data":"65cc54aa3a190682b61b6ca44a82953d653cbf0b0fd1e50cc9a0f84c99a6b5e6"} Jan 22 12:36:03 crc kubenswrapper[5120]: I0122 12:36:03.077651 5120 generic.go:358] "Generic (PLEG): container finished" podID="823e7d1b-b74d-47c1-967a-fc44dab160b8" containerID="91b445bc688764113fdba4792727a51c31d4ee1ea49e151d6ba316bfc799e5a0" exitCode=0 Jan 22 12:36:03 crc kubenswrapper[5120]: I0122 12:36:03.078210 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484756-svdvw" event={"ID":"823e7d1b-b74d-47c1-967a-fc44dab160b8","Type":"ContainerDied","Data":"91b445bc688764113fdba4792727a51c31d4ee1ea49e151d6ba316bfc799e5a0"} Jan 22 12:36:04 crc kubenswrapper[5120]: I0122 12:36:04.400785 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484756-svdvw" Jan 22 12:36:04 crc kubenswrapper[5120]: I0122 12:36:04.497626 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjh4n\" (UniqueName: \"kubernetes.io/projected/823e7d1b-b74d-47c1-967a-fc44dab160b8-kube-api-access-sjh4n\") pod \"823e7d1b-b74d-47c1-967a-fc44dab160b8\" (UID: \"823e7d1b-b74d-47c1-967a-fc44dab160b8\") " Jan 22 12:36:04 crc kubenswrapper[5120]: I0122 12:36:04.507511 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/823e7d1b-b74d-47c1-967a-fc44dab160b8-kube-api-access-sjh4n" (OuterVolumeSpecName: "kube-api-access-sjh4n") pod "823e7d1b-b74d-47c1-967a-fc44dab160b8" (UID: "823e7d1b-b74d-47c1-967a-fc44dab160b8"). InnerVolumeSpecName "kube-api-access-sjh4n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:36:04 crc kubenswrapper[5120]: I0122 12:36:04.600317 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sjh4n\" (UniqueName: \"kubernetes.io/projected/823e7d1b-b74d-47c1-967a-fc44dab160b8-kube-api-access-sjh4n\") on node \"crc\" DevicePath \"\"" Jan 22 12:36:05 crc kubenswrapper[5120]: I0122 12:36:05.115430 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484756-svdvw" event={"ID":"823e7d1b-b74d-47c1-967a-fc44dab160b8","Type":"ContainerDied","Data":"65cc54aa3a190682b61b6ca44a82953d653cbf0b0fd1e50cc9a0f84c99a6b5e6"} Jan 22 12:36:05 crc kubenswrapper[5120]: I0122 12:36:05.115494 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65cc54aa3a190682b61b6ca44a82953d653cbf0b0fd1e50cc9a0f84c99a6b5e6" Jan 22 12:36:05 crc kubenswrapper[5120]: I0122 12:36:05.115487 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484756-svdvw" Jan 22 12:36:05 crc kubenswrapper[5120]: I0122 12:36:05.464194 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484750-sqt7t"] Jan 22 12:36:05 crc kubenswrapper[5120]: I0122 12:36:05.475056 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484750-sqt7t"] Jan 22 12:36:05 crc kubenswrapper[5120]: I0122 12:36:05.581310 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f073b85-c7cf-489a-8e89-7bf6bc9a2124" path="/var/lib/kubelet/pods/3f073b85-c7cf-489a-8e89-7bf6bc9a2124/volumes" Jan 22 12:36:31 crc kubenswrapper[5120]: I0122 12:36:31.972943 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:36:31 crc kubenswrapper[5120]: I0122 12:36:31.973732 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:37:01 crc kubenswrapper[5120]: I0122 12:37:01.972977 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:37:01 crc kubenswrapper[5120]: I0122 12:37:01.973567 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:37:03 crc kubenswrapper[5120]: I0122 12:37:03.443259 5120 scope.go:117] "RemoveContainer" containerID="e252cf75f043a8d827ee19582fab16cdd6e6b640af539cb8d97f2f626b48055f" Jan 22 12:37:31 crc kubenswrapper[5120]: I0122 12:37:31.972924 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:37:31 crc kubenswrapper[5120]: I0122 12:37:31.974270 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:37:31 crc kubenswrapper[5120]: I0122 12:37:31.974349 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:37:31 crc kubenswrapper[5120]: I0122 12:37:31.975524 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"1b41c7747b82f18e38fe4a73127e6bf34587d1370adab02c57f7c18e148832ca"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:37:31 crc kubenswrapper[5120]: I0122 12:37:31.975668 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://1b41c7747b82f18e38fe4a73127e6bf34587d1370adab02c57f7c18e148832ca" gracePeriod=600 Jan 22 12:37:32 crc kubenswrapper[5120]: I0122 12:37:32.116335 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:37:32 crc kubenswrapper[5120]: I0122 12:37:32.971661 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="1b41c7747b82f18e38fe4a73127e6bf34587d1370adab02c57f7c18e148832ca" exitCode=0 Jan 22 12:37:32 crc kubenswrapper[5120]: I0122 12:37:32.971813 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"1b41c7747b82f18e38fe4a73127e6bf34587d1370adab02c57f7c18e148832ca"} Jan 22 12:37:32 crc kubenswrapper[5120]: I0122 12:37:32.972314 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626"} Jan 22 12:37:32 crc kubenswrapper[5120]: I0122 12:37:32.972358 5120 scope.go:117] "RemoveContainer" containerID="f2643d6719d898899b7fe441e6374794306e1af141db7ee92ac8d42af384da07" Jan 22 12:37:51 crc kubenswrapper[5120]: I0122 12:37:51.901185 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:37:51 crc kubenswrapper[5120]: I0122 12:37:51.902853 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:37:51 crc kubenswrapper[5120]: I0122 12:37:51.932943 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:37:51 crc kubenswrapper[5120]: I0122 12:37:51.936642 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.149460 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484758-hjfwt"] Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.151460 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="823e7d1b-b74d-47c1-967a-fc44dab160b8" containerName="oc" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.151482 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="823e7d1b-b74d-47c1-967a-fc44dab160b8" containerName="oc" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.151726 5120 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="823e7d1b-b74d-47c1-967a-fc44dab160b8" containerName="oc" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.224752 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484758-hjfwt"] Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.224886 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484758-hjfwt" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.227335 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.227644 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.228066 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.316594 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4vv8\" (UniqueName: \"kubernetes.io/projected/96479bdf-524d-44cf-84b0-0be4a402a317-kube-api-access-f4vv8\") pod \"auto-csr-approver-29484758-hjfwt\" (UID: \"96479bdf-524d-44cf-84b0-0be4a402a317\") " pod="openshift-infra/auto-csr-approver-29484758-hjfwt" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.418270 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f4vv8\" (UniqueName: \"kubernetes.io/projected/96479bdf-524d-44cf-84b0-0be4a402a317-kube-api-access-f4vv8\") pod \"auto-csr-approver-29484758-hjfwt\" (UID: \"96479bdf-524d-44cf-84b0-0be4a402a317\") " pod="openshift-infra/auto-csr-approver-29484758-hjfwt" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.451351 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4vv8\" (UniqueName: \"kubernetes.io/projected/96479bdf-524d-44cf-84b0-0be4a402a317-kube-api-access-f4vv8\") pod \"auto-csr-approver-29484758-hjfwt\" (UID: \"96479bdf-524d-44cf-84b0-0be4a402a317\") " pod="openshift-infra/auto-csr-approver-29484758-hjfwt" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.548879 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484758-hjfwt" Jan 22 12:38:00 crc kubenswrapper[5120]: I0122 12:38:00.777646 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484758-hjfwt"] Jan 22 12:38:00 crc kubenswrapper[5120]: W0122 12:38:00.784297 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96479bdf_524d_44cf_84b0_0be4a402a317.slice/crio-a927b4a72c5ce00f7af9192252ae6cb7cbb9c1b0fb1dbd61f05cbb3ce843f193 WatchSource:0}: Error finding container a927b4a72c5ce00f7af9192252ae6cb7cbb9c1b0fb1dbd61f05cbb3ce843f193: Status 404 returned error can't find the container with id a927b4a72c5ce00f7af9192252ae6cb7cbb9c1b0fb1dbd61f05cbb3ce843f193 Jan 22 12:38:01 crc kubenswrapper[5120]: I0122 12:38:01.222671 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484758-hjfwt" event={"ID":"96479bdf-524d-44cf-84b0-0be4a402a317","Type":"ContainerStarted","Data":"a927b4a72c5ce00f7af9192252ae6cb7cbb9c1b0fb1dbd61f05cbb3ce843f193"} Jan 22 12:38:03 crc kubenswrapper[5120]: I0122 12:38:03.248550 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484758-hjfwt" event={"ID":"96479bdf-524d-44cf-84b0-0be4a402a317","Type":"ContainerDied","Data":"7a805c4c05ced5399f3bf914b6a245f885524c5c4ac80c4ac8f87f8faa63c41b"} Jan 22 12:38:03 crc kubenswrapper[5120]: I0122 12:38:03.248433 5120 generic.go:358] "Generic (PLEG): container finished" podID="96479bdf-524d-44cf-84b0-0be4a402a317" containerID="7a805c4c05ced5399f3bf914b6a245f885524c5c4ac80c4ac8f87f8faa63c41b" exitCode=0 Jan 22 12:38:04 crc kubenswrapper[5120]: I0122 12:38:04.595146 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484758-hjfwt" Jan 22 12:38:04 crc kubenswrapper[5120]: I0122 12:38:04.617165 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4vv8\" (UniqueName: \"kubernetes.io/projected/96479bdf-524d-44cf-84b0-0be4a402a317-kube-api-access-f4vv8\") pod \"96479bdf-524d-44cf-84b0-0be4a402a317\" (UID: \"96479bdf-524d-44cf-84b0-0be4a402a317\") " Jan 22 12:38:04 crc kubenswrapper[5120]: I0122 12:38:04.623172 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96479bdf-524d-44cf-84b0-0be4a402a317-kube-api-access-f4vv8" (OuterVolumeSpecName: "kube-api-access-f4vv8") pod "96479bdf-524d-44cf-84b0-0be4a402a317" (UID: "96479bdf-524d-44cf-84b0-0be4a402a317"). InnerVolumeSpecName "kube-api-access-f4vv8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:38:04 crc kubenswrapper[5120]: I0122 12:38:04.718612 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f4vv8\" (UniqueName: \"kubernetes.io/projected/96479bdf-524d-44cf-84b0-0be4a402a317-kube-api-access-f4vv8\") on node \"crc\" DevicePath \"\"" Jan 22 12:38:05 crc kubenswrapper[5120]: I0122 12:38:05.269590 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484758-hjfwt" Jan 22 12:38:05 crc kubenswrapper[5120]: I0122 12:38:05.269617 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484758-hjfwt" event={"ID":"96479bdf-524d-44cf-84b0-0be4a402a317","Type":"ContainerDied","Data":"a927b4a72c5ce00f7af9192252ae6cb7cbb9c1b0fb1dbd61f05cbb3ce843f193"} Jan 22 12:38:05 crc kubenswrapper[5120]: I0122 12:38:05.269679 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a927b4a72c5ce00f7af9192252ae6cb7cbb9c1b0fb1dbd61f05cbb3ce843f193" Jan 22 12:38:05 crc kubenswrapper[5120]: I0122 12:38:05.670737 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484752-v5hcs"] Jan 22 12:38:05 crc kubenswrapper[5120]: I0122 12:38:05.678242 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484752-v5hcs"] Jan 22 12:38:07 crc kubenswrapper[5120]: I0122 12:38:07.581595 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7943991c-5c7d-4a50-80ac-42d7eb0f624f" path="/var/lib/kubelet/pods/7943991c-5c7d-4a50-80ac-42d7eb0f624f/volumes" Jan 22 12:39:03 crc kubenswrapper[5120]: I0122 12:39:03.608865 5120 scope.go:117] "RemoveContainer" containerID="2871f40e4381a68e2190c46528c45a6f62b9393512cbac4263f64ed579203e6a" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.147937 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484760-9gmsd"] Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.150754 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="96479bdf-524d-44cf-84b0-0be4a402a317" containerName="oc" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.150882 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="96479bdf-524d-44cf-84b0-0be4a402a317" containerName="oc" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.151099 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="96479bdf-524d-44cf-84b0-0be4a402a317" containerName="oc" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.203244 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484760-9gmsd"] Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.203424 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.208501 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.210072 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.210347 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.310870 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkb5l\" (UniqueName: \"kubernetes.io/projected/eaee48fe-e9ab-42e2-926c-6d27414eec47-kube-api-access-tkb5l\") pod \"auto-csr-approver-29484760-9gmsd\" (UID: \"eaee48fe-e9ab-42e2-926c-6d27414eec47\") " pod="openshift-infra/auto-csr-approver-29484760-9gmsd" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.412801 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tkb5l\" (UniqueName: \"kubernetes.io/projected/eaee48fe-e9ab-42e2-926c-6d27414eec47-kube-api-access-tkb5l\") pod \"auto-csr-approver-29484760-9gmsd\" (UID: \"eaee48fe-e9ab-42e2-926c-6d27414eec47\") " pod="openshift-infra/auto-csr-approver-29484760-9gmsd" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.453296 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkb5l\" (UniqueName: \"kubernetes.io/projected/eaee48fe-e9ab-42e2-926c-6d27414eec47-kube-api-access-tkb5l\") pod \"auto-csr-approver-29484760-9gmsd\" (UID: \"eaee48fe-e9ab-42e2-926c-6d27414eec47\") " pod="openshift-infra/auto-csr-approver-29484760-9gmsd" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.534253 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" Jan 22 12:40:00 crc kubenswrapper[5120]: I0122 12:40:00.764585 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484760-9gmsd"] Jan 22 12:40:01 crc kubenswrapper[5120]: I0122 12:40:01.378380 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" event={"ID":"eaee48fe-e9ab-42e2-926c-6d27414eec47","Type":"ContainerStarted","Data":"d93948497a91b689d231c5bce65f008fcb9cc8daa4b86d583f5931af223f8b5d"} Jan 22 12:40:01 crc kubenswrapper[5120]: I0122 12:40:01.974666 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:40:01 crc kubenswrapper[5120]: I0122 12:40:01.974751 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:40:02 crc kubenswrapper[5120]: I0122 12:40:02.388090 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" event={"ID":"eaee48fe-e9ab-42e2-926c-6d27414eec47","Type":"ContainerStarted","Data":"d38a722a84b2b8810e74617131f1d0281e3449f071650edfa7fce4122e413c26"} Jan 22 12:40:02 crc kubenswrapper[5120]: I0122 12:40:02.407928 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" podStartSLOduration=1.330582421 podStartE2EDuration="2.407908378s" podCreationTimestamp="2026-01-22 12:40:00 +0000 UTC" firstStartedPulling="2026-01-22 12:40:00.778256493 +0000 UTC m=+3135.522204834" lastFinishedPulling="2026-01-22 12:40:01.85558245 +0000 UTC m=+3136.599530791" observedRunningTime="2026-01-22 12:40:02.401612396 +0000 UTC m=+3137.145560757" watchObservedRunningTime="2026-01-22 12:40:02.407908378 +0000 UTC m=+3137.151856729" Jan 22 12:40:03 crc kubenswrapper[5120]: I0122 12:40:03.403081 5120 generic.go:358] "Generic (PLEG): container finished" podID="eaee48fe-e9ab-42e2-926c-6d27414eec47" containerID="d38a722a84b2b8810e74617131f1d0281e3449f071650edfa7fce4122e413c26" exitCode=0 Jan 22 12:40:03 crc kubenswrapper[5120]: I0122 12:40:03.403312 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" event={"ID":"eaee48fe-e9ab-42e2-926c-6d27414eec47","Type":"ContainerDied","Data":"d38a722a84b2b8810e74617131f1d0281e3449f071650edfa7fce4122e413c26"} Jan 22 12:40:04 crc kubenswrapper[5120]: I0122 12:40:04.776172 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" Jan 22 12:40:04 crc kubenswrapper[5120]: I0122 12:40:04.910022 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkb5l\" (UniqueName: \"kubernetes.io/projected/eaee48fe-e9ab-42e2-926c-6d27414eec47-kube-api-access-tkb5l\") pod \"eaee48fe-e9ab-42e2-926c-6d27414eec47\" (UID: \"eaee48fe-e9ab-42e2-926c-6d27414eec47\") " Jan 22 12:40:04 crc kubenswrapper[5120]: I0122 12:40:04.931504 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaee48fe-e9ab-42e2-926c-6d27414eec47-kube-api-access-tkb5l" (OuterVolumeSpecName: "kube-api-access-tkb5l") pod "eaee48fe-e9ab-42e2-926c-6d27414eec47" (UID: "eaee48fe-e9ab-42e2-926c-6d27414eec47"). InnerVolumeSpecName "kube-api-access-tkb5l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:40:05 crc kubenswrapper[5120]: I0122 12:40:05.012718 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkb5l\" (UniqueName: \"kubernetes.io/projected/eaee48fe-e9ab-42e2-926c-6d27414eec47-kube-api-access-tkb5l\") on node \"crc\" DevicePath \"\"" Jan 22 12:40:05 crc kubenswrapper[5120]: I0122 12:40:05.438027 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" event={"ID":"eaee48fe-e9ab-42e2-926c-6d27414eec47","Type":"ContainerDied","Data":"d93948497a91b689d231c5bce65f008fcb9cc8daa4b86d583f5931af223f8b5d"} Jan 22 12:40:05 crc kubenswrapper[5120]: I0122 12:40:05.438385 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d93948497a91b689d231c5bce65f008fcb9cc8daa4b86d583f5931af223f8b5d" Jan 22 12:40:05 crc kubenswrapper[5120]: I0122 12:40:05.438452 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484760-9gmsd" Jan 22 12:40:05 crc kubenswrapper[5120]: I0122 12:40:05.492252 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484754-fgcqw"] Jan 22 12:40:05 crc kubenswrapper[5120]: I0122 12:40:05.503899 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484754-fgcqw"] Jan 22 12:40:05 crc kubenswrapper[5120]: I0122 12:40:05.584722 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18ea5adf-2b29-46ff-8c49-515dd1615879" path="/var/lib/kubelet/pods/18ea5adf-2b29-46ff-8c49-515dd1615879/volumes" Jan 22 12:40:05 crc kubenswrapper[5120]: E0122 12:40:05.607365 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaee48fe_e9ab_42e2_926c_6d27414eec47.slice\": RecentStats: unable to find data in memory cache]" Jan 22 12:40:31 crc kubenswrapper[5120]: I0122 12:40:31.972550 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:40:31 crc kubenswrapper[5120]: I0122 12:40:31.973116 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:41:01 crc kubenswrapper[5120]: I0122 12:41:01.972903 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:41:01 crc kubenswrapper[5120]: I0122 12:41:01.973828 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:41:01 crc kubenswrapper[5120]: I0122 12:41:01.973903 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:41:01 crc kubenswrapper[5120]: I0122 12:41:01.974789 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:41:01 crc kubenswrapper[5120]: I0122 12:41:01.974848 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" gracePeriod=600 Jan 22 
12:41:02 crc kubenswrapper[5120]: E0122 12:41:02.105113 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:41:02 crc kubenswrapper[5120]: I0122 12:41:02.986858 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" exitCode=0 Jan 22 12:41:02 crc kubenswrapper[5120]: I0122 12:41:02.986926 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626"} Jan 22 12:41:02 crc kubenswrapper[5120]: I0122 12:41:02.987032 5120 scope.go:117] "RemoveContainer" containerID="1b41c7747b82f18e38fe4a73127e6bf34587d1370adab02c57f7c18e148832ca" Jan 22 12:41:02 crc kubenswrapper[5120]: I0122 12:41:02.987658 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:41:02 crc kubenswrapper[5120]: E0122 12:41:02.988168 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:41:03 crc kubenswrapper[5120]: I0122 12:41:03.798436 5120 scope.go:117] "RemoveContainer" containerID="f0addba7235b3cf2978323be2668d57256d6e16bc46c625f5d2101670fd5355e" Jan 22 12:41:17 crc kubenswrapper[5120]: I0122 12:41:17.572646 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:41:17 crc kubenswrapper[5120]: E0122 12:41:17.574748 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:41:32 crc kubenswrapper[5120]: I0122 12:41:32.571512 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:41:32 crc kubenswrapper[5120]: E0122 12:41:32.572353 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:41:44 crc kubenswrapper[5120]: I0122 12:41:44.573264 5120 scope.go:117] "RemoveContainer" 
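
Once machine-config-daemon keeps failing, the kubelet stops restarting it eagerly and parks it in CrashLoopBackOff; every sync pass above then logs the same "back-off 5m0s restarting failed container" error until the delay expires. The 5m0s is the backoff cap, reached by repeatedly doubling from an initial delay (10 seconds by kubelet default, as far as I know). Sketch of the doubling-with-cap rule:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        backoff, maxDelay := 10*time.Second, 5*time.Minute
        for i := 0; i < 7; i++ {
            fmt.Println(backoff) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
            backoff *= 2
            if backoff > maxDelay {
                backoff = maxDelay
            }
        }
    }
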
containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:41:44 crc kubenswrapper[5120]: E0122 12:41:44.574201 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:41:57 crc kubenswrapper[5120]: I0122 12:41:57.572823 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:41:57 crc kubenswrapper[5120]: E0122 12:41:57.574193 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.165765 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484762-tjrcq"] Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.167029 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eaee48fe-e9ab-42e2-926c-6d27414eec47" containerName="oc" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.167053 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaee48fe-e9ab-42e2-926c-6d27414eec47" containerName="oc" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.167336 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="eaee48fe-e9ab-42e2-926c-6d27414eec47" containerName="oc" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.173975 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484762-tjrcq" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.179037 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.179342 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.180470 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484762-tjrcq"] Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.182768 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.266747 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcrrj\" (UniqueName: \"kubernetes.io/projected/4579a92b-d731-4627-b131-998575817977-kube-api-access-vcrrj\") pod \"auto-csr-approver-29484762-tjrcq\" (UID: \"4579a92b-d731-4627-b131-998575817977\") " pod="openshift-infra/auto-csr-approver-29484762-tjrcq" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.368902 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vcrrj\" (UniqueName: \"kubernetes.io/projected/4579a92b-d731-4627-b131-998575817977-kube-api-access-vcrrj\") pod \"auto-csr-approver-29484762-tjrcq\" (UID: \"4579a92b-d731-4627-b131-998575817977\") " pod="openshift-infra/auto-csr-approver-29484762-tjrcq" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.415495 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcrrj\" (UniqueName: \"kubernetes.io/projected/4579a92b-d731-4627-b131-998575817977-kube-api-access-vcrrj\") pod \"auto-csr-approver-29484762-tjrcq\" (UID: \"4579a92b-d731-4627-b131-998575817977\") " pod="openshift-infra/auto-csr-approver-29484762-tjrcq" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.509081 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484762-tjrcq" Jan 22 12:42:00 crc kubenswrapper[5120]: I0122 12:42:00.812642 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484762-tjrcq"] Jan 22 12:42:01 crc kubenswrapper[5120]: I0122 12:42:01.543943 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484762-tjrcq" event={"ID":"4579a92b-d731-4627-b131-998575817977","Type":"ContainerStarted","Data":"d91c96097f50254de6de87efea68a909d35cf8ebb31471dd58a494b632e595ee"} Jan 22 12:42:02 crc kubenswrapper[5120]: I0122 12:42:02.559610 5120 generic.go:358] "Generic (PLEG): container finished" podID="4579a92b-d731-4627-b131-998575817977" containerID="2a6f5b0d983a897bcecca87bafc7ac00eaf5f0a889d5650209a6e10cf38669b5" exitCode=0 Jan 22 12:42:02 crc kubenswrapper[5120]: I0122 12:42:02.559708 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484762-tjrcq" event={"ID":"4579a92b-d731-4627-b131-998575817977","Type":"ContainerDied","Data":"2a6f5b0d983a897bcecca87bafc7ac00eaf5f0a889d5650209a6e10cf38669b5"} Jan 22 12:42:03 crc kubenswrapper[5120]: I0122 12:42:03.819089 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484762-tjrcq" Jan 22 12:42:03 crc kubenswrapper[5120]: I0122 12:42:03.924369 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcrrj\" (UniqueName: \"kubernetes.io/projected/4579a92b-d731-4627-b131-998575817977-kube-api-access-vcrrj\") pod \"4579a92b-d731-4627-b131-998575817977\" (UID: \"4579a92b-d731-4627-b131-998575817977\") " Jan 22 12:42:03 crc kubenswrapper[5120]: I0122 12:42:03.948546 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4579a92b-d731-4627-b131-998575817977-kube-api-access-vcrrj" (OuterVolumeSpecName: "kube-api-access-vcrrj") pod "4579a92b-d731-4627-b131-998575817977" (UID: "4579a92b-d731-4627-b131-998575817977"). InnerVolumeSpecName "kube-api-access-vcrrj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:42:04 crc kubenswrapper[5120]: I0122 12:42:04.026826 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vcrrj\" (UniqueName: \"kubernetes.io/projected/4579a92b-d731-4627-b131-998575817977-kube-api-access-vcrrj\") on node \"crc\" DevicePath \"\"" Jan 22 12:42:04 crc kubenswrapper[5120]: I0122 12:42:04.576505 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484762-tjrcq" Jan 22 12:42:04 crc kubenswrapper[5120]: I0122 12:42:04.576525 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484762-tjrcq" event={"ID":"4579a92b-d731-4627-b131-998575817977","Type":"ContainerDied","Data":"d91c96097f50254de6de87efea68a909d35cf8ebb31471dd58a494b632e595ee"} Jan 22 12:42:04 crc kubenswrapper[5120]: I0122 12:42:04.577593 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d91c96097f50254de6de87efea68a909d35cf8ebb31471dd58a494b632e595ee" Jan 22 12:42:04 crc kubenswrapper[5120]: I0122 12:42:04.893878 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484756-svdvw"] Jan 22 12:42:04 crc kubenswrapper[5120]: I0122 12:42:04.899775 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484756-svdvw"] Jan 22 12:42:05 crc kubenswrapper[5120]: I0122 12:42:05.590788 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="823e7d1b-b74d-47c1-967a-fc44dab160b8" path="/var/lib/kubelet/pods/823e7d1b-b74d-47c1-967a-fc44dab160b8/volumes" Jan 22 12:42:08 crc kubenswrapper[5120]: I0122 12:42:08.572387 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:42:08 crc kubenswrapper[5120]: E0122 12:42:08.573299 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.182947 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s2r5j"] Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.184170 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="4579a92b-d731-4627-b131-998575817977" containerName="oc" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.184184 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="4579a92b-d731-4627-b131-998575817977" containerName="oc" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.184303 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="4579a92b-d731-4627-b131-998575817977" containerName="oc" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.202457 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.202666 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2r5j"] Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.247556 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25l4h\" (UniqueName: \"kubernetes.io/projected/008de9a1-3447-4f73-ab0e-f1b6d234a1de-kube-api-access-25l4h\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.247936 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-catalog-content\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.248149 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-utilities\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.350202 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-25l4h\" (UniqueName: \"kubernetes.io/projected/008de9a1-3447-4f73-ab0e-f1b6d234a1de-kube-api-access-25l4h\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.350515 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-catalog-content\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.350621 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-utilities\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.351497 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-utilities\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " 
pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.351728 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-catalog-content\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.374661 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-25l4h\" (UniqueName: \"kubernetes.io/projected/008de9a1-3447-4f73-ab0e-f1b6d234a1de-kube-api-access-25l4h\") pod \"redhat-operators-s2r5j\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:10 crc kubenswrapper[5120]: I0122 12:42:10.568705 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:11 crc kubenswrapper[5120]: I0122 12:42:11.034496 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s2r5j"] Jan 22 12:42:11 crc kubenswrapper[5120]: I0122 12:42:11.642045 5120 generic.go:358] "Generic (PLEG): container finished" podID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerID="654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727" exitCode=0 Jan 22 12:42:11 crc kubenswrapper[5120]: I0122 12:42:11.642707 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2r5j" event={"ID":"008de9a1-3447-4f73-ab0e-f1b6d234a1de","Type":"ContainerDied","Data":"654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727"} Jan 22 12:42:11 crc kubenswrapper[5120]: I0122 12:42:11.642746 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2r5j" event={"ID":"008de9a1-3447-4f73-ab0e-f1b6d234a1de","Type":"ContainerStarted","Data":"274716f9fe261a0e6693c3f5dc6652a5f7b6c5afa3fae9c6eb706d245a939590"} Jan 22 12:42:13 crc kubenswrapper[5120]: I0122 12:42:13.663624 5120 generic.go:358] "Generic (PLEG): container finished" podID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerID="fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9" exitCode=0 Jan 22 12:42:13 crc kubenswrapper[5120]: I0122 12:42:13.663677 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2r5j" event={"ID":"008de9a1-3447-4f73-ab0e-f1b6d234a1de","Type":"ContainerDied","Data":"fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9"} Jan 22 12:42:14 crc kubenswrapper[5120]: I0122 12:42:14.677598 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2r5j" event={"ID":"008de9a1-3447-4f73-ab0e-f1b6d234a1de","Type":"ContainerStarted","Data":"9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63"} Jan 22 12:42:14 crc kubenswrapper[5120]: I0122 12:42:14.711775 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s2r5j" podStartSLOduration=3.634390657 podStartE2EDuration="4.711742824s" podCreationTimestamp="2026-01-22 12:42:10 +0000 UTC" firstStartedPulling="2026-01-22 12:42:11.643728705 +0000 UTC m=+3266.387677046" lastFinishedPulling="2026-01-22 12:42:12.721080872 +0000 UTC m=+3267.465029213" observedRunningTime="2026-01-22 12:42:14.704576531 +0000 UTC m=+3269.448524882" 
watchObservedRunningTime="2026-01-22 12:42:14.711742824 +0000 UTC m=+3269.455691235" Jan 22 12:42:20 crc kubenswrapper[5120]: I0122 12:42:20.569942 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:20 crc kubenswrapper[5120]: I0122 12:42:20.570759 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:20 crc kubenswrapper[5120]: I0122 12:42:20.654249 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:20 crc kubenswrapper[5120]: I0122 12:42:20.797726 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:20 crc kubenswrapper[5120]: I0122 12:42:20.900933 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2r5j"] Jan 22 12:42:22 crc kubenswrapper[5120]: I0122 12:42:22.748251 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s2r5j" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="registry-server" containerID="cri-o://9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63" gracePeriod=2 Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.571952 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:42:23 crc kubenswrapper[5120]: E0122 12:42:23.573987 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.654507 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.775098 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25l4h\" (UniqueName: \"kubernetes.io/projected/008de9a1-3447-4f73-ab0e-f1b6d234a1de-kube-api-access-25l4h\") pod \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.775237 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-utilities\") pod \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.775321 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-catalog-content\") pod \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\" (UID: \"008de9a1-3447-4f73-ab0e-f1b6d234a1de\") " Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.778167 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-utilities" (OuterVolumeSpecName: "utilities") pod "008de9a1-3447-4f73-ab0e-f1b6d234a1de" (UID: "008de9a1-3447-4f73-ab0e-f1b6d234a1de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.787658 5120 generic.go:358] "Generic (PLEG): container finished" podID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerID="9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63" exitCode=0 Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.787833 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2r5j" event={"ID":"008de9a1-3447-4f73-ab0e-f1b6d234a1de","Type":"ContainerDied","Data":"9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63"} Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.787914 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s2r5j" event={"ID":"008de9a1-3447-4f73-ab0e-f1b6d234a1de","Type":"ContainerDied","Data":"274716f9fe261a0e6693c3f5dc6652a5f7b6c5afa3fae9c6eb706d245a939590"} Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.787947 5120 scope.go:117] "RemoveContainer" containerID="9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.788288 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s2r5j" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.789315 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008de9a1-3447-4f73-ab0e-f1b6d234a1de-kube-api-access-25l4h" (OuterVolumeSpecName: "kube-api-access-25l4h") pod "008de9a1-3447-4f73-ab0e-f1b6d234a1de" (UID: "008de9a1-3447-4f73-ab0e-f1b6d234a1de"). InnerVolumeSpecName "kube-api-access-25l4h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.837869 5120 scope.go:117] "RemoveContainer" containerID="fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.865048 5120 scope.go:117] "RemoveContainer" containerID="654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.877151 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-25l4h\" (UniqueName: \"kubernetes.io/projected/008de9a1-3447-4f73-ab0e-f1b6d234a1de-kube-api-access-25l4h\") on node \"crc\" DevicePath \"\"" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.877186 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.887242 5120 scope.go:117] "RemoveContainer" containerID="9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63" Jan 22 12:42:23 crc kubenswrapper[5120]: E0122 12:42:23.888048 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63\": container with ID starting with 9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63 not found: ID does not exist" containerID="9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.888105 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63"} err="failed to get container status \"9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63\": rpc error: code = NotFound desc = could not find container \"9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63\": container with ID starting with 9f06a2a3fb279baa3556c568717d0d3771fc8977f50818333dc9c9257ea5db63 not found: ID does not exist" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.888174 5120 scope.go:117] "RemoveContainer" containerID="fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9" Jan 22 12:42:23 crc kubenswrapper[5120]: E0122 12:42:23.888605 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9\": container with ID starting with fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9 not found: ID does not exist" containerID="fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.888764 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9"} err="failed to get container status \"fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9\": rpc error: code = NotFound desc = could not find container \"fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9\": container with ID starting with fc64e890e47308fa48da847f52304ded8766d2277cb7c3f4eab838179ed516b9 not found: ID does not exist" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.888943 5120 scope.go:117] "RemoveContainer" 
containerID="654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727" Jan 22 12:42:23 crc kubenswrapper[5120]: E0122 12:42:23.889384 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727\": container with ID starting with 654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727 not found: ID does not exist" containerID="654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727" Jan 22 12:42:23 crc kubenswrapper[5120]: I0122 12:42:23.889413 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727"} err="failed to get container status \"654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727\": rpc error: code = NotFound desc = could not find container \"654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727\": container with ID starting with 654ca3a909be53269e25fca9d091b52845269cbeb2a88b176cb8a795b00ca727 not found: ID does not exist" Jan 22 12:42:24 crc kubenswrapper[5120]: I0122 12:42:24.825362 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "008de9a1-3447-4f73-ab0e-f1b6d234a1de" (UID: "008de9a1-3447-4f73-ab0e-f1b6d234a1de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:42:24 crc kubenswrapper[5120]: I0122 12:42:24.892710 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008de9a1-3447-4f73-ab0e-f1b6d234a1de-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:42:25 crc kubenswrapper[5120]: I0122 12:42:25.031781 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s2r5j"] Jan 22 12:42:25 crc kubenswrapper[5120]: I0122 12:42:25.038390 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s2r5j"] Jan 22 12:42:25 crc kubenswrapper[5120]: I0122 12:42:25.583504 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" path="/var/lib/kubelet/pods/008de9a1-3447-4f73-ab0e-f1b6d234a1de/volumes" Jan 22 12:42:35 crc kubenswrapper[5120]: I0122 12:42:35.582856 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:42:35 crc kubenswrapper[5120]: E0122 12:42:35.584073 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:42:48 crc kubenswrapper[5120]: I0122 12:42:48.571602 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:42:48 crc kubenswrapper[5120]: E0122 12:42:48.572632 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:42:52 crc kubenswrapper[5120]: I0122 12:42:52.068521 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:42:52 crc kubenswrapper[5120]: I0122 12:42:52.079125 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:42:52 crc kubenswrapper[5120]: I0122 12:42:52.083321 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:42:52 crc kubenswrapper[5120]: I0122 12:42:52.093384 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:43:02 crc kubenswrapper[5120]: I0122 12:43:02.571348 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:43:02 crc kubenswrapper[5120]: E0122 12:43:02.572400 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:43:03 crc kubenswrapper[5120]: I0122 12:43:03.938016 5120 scope.go:117] "RemoveContainer" containerID="91b445bc688764113fdba4792727a51c31d4ee1ea49e151d6ba316bfc799e5a0" Jan 22 12:43:17 crc kubenswrapper[5120]: I0122 12:43:17.573608 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:43:17 crc kubenswrapper[5120]: E0122 12:43:17.574868 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.357898 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rp4g5"] Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.359777 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="registry-server" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.359803 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="registry-server" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.359840 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="extract-utilities" Jan 22 12:43:25 crc kubenswrapper[5120]: 
I0122 12:43:25.359853 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="extract-utilities" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.359926 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="extract-content" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.359938 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="extract-content" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.360183 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="008de9a1-3447-4f73-ab0e-f1b6d234a1de" containerName="registry-server" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.366600 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.378118 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rp4g5"] Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.475585 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nbfw\" (UniqueName: \"kubernetes.io/projected/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-kube-api-access-2nbfw\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.475688 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-utilities\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.475743 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-catalog-content\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.579519 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2nbfw\" (UniqueName: \"kubernetes.io/projected/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-kube-api-access-2nbfw\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.579605 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-utilities\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.579639 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-catalog-content\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " 
pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.580226 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-utilities\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.581731 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-catalog-content\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.619824 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nbfw\" (UniqueName: \"kubernetes.io/projected/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-kube-api-access-2nbfw\") pod \"certified-operators-rp4g5\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:25 crc kubenswrapper[5120]: I0122 12:43:25.715257 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:26 crc kubenswrapper[5120]: I0122 12:43:26.168059 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rp4g5"] Jan 22 12:43:26 crc kubenswrapper[5120]: I0122 12:43:26.168898 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:43:26 crc kubenswrapper[5120]: I0122 12:43:26.417267 5120 generic.go:358] "Generic (PLEG): container finished" podID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerID="8438862cfd80a291a8ce8d21963ab85a62a3192253e9207c21bfb82f7e78df12" exitCode=0 Jan 22 12:43:26 crc kubenswrapper[5120]: I0122 12:43:26.417345 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4g5" event={"ID":"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10","Type":"ContainerDied","Data":"8438862cfd80a291a8ce8d21963ab85a62a3192253e9207c21bfb82f7e78df12"} Jan 22 12:43:26 crc kubenswrapper[5120]: I0122 12:43:26.417416 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4g5" event={"ID":"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10","Type":"ContainerStarted","Data":"79b9ed715683bd1289c5364bc8a7b9157725298506287b161a5bb30a388ac7a4"} Jan 22 12:43:27 crc kubenswrapper[5120]: I0122 12:43:27.428101 5120 generic.go:358] "Generic (PLEG): container finished" podID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerID="ee52f0be235791cdfb04c7d77af1b138bf274fd830340153c8f962eccee34da4" exitCode=0 Jan 22 12:43:27 crc kubenswrapper[5120]: I0122 12:43:27.428534 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4g5" event={"ID":"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10","Type":"ContainerDied","Data":"ee52f0be235791cdfb04c7d77af1b138bf274fd830340153c8f962eccee34da4"} Jan 22 12:43:28 crc kubenswrapper[5120]: I0122 12:43:28.442882 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4g5" event={"ID":"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10","Type":"ContainerStarted","Data":"47afaf343a8e57a2141b4fca7f97fbb2810bf0c2eee6c99703640a2db6eb664b"} 
Jan 22 12:43:28 crc kubenswrapper[5120]: I0122 12:43:28.571771 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:43:28 crc kubenswrapper[5120]: E0122 12:43:28.572011 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:43:35 crc kubenswrapper[5120]: I0122 12:43:35.716263 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:35 crc kubenswrapper[5120]: I0122 12:43:35.718814 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:35 crc kubenswrapper[5120]: I0122 12:43:35.772832 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:35 crc kubenswrapper[5120]: I0122 12:43:35.809088 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rp4g5" podStartSLOduration=10.271308444 podStartE2EDuration="10.80905739s" podCreationTimestamp="2026-01-22 12:43:25 +0000 UTC" firstStartedPulling="2026-01-22 12:43:26.418695913 +0000 UTC m=+3341.162644274" lastFinishedPulling="2026-01-22 12:43:26.956444839 +0000 UTC m=+3341.700393220" observedRunningTime="2026-01-22 12:43:28.464246881 +0000 UTC m=+3343.208195252" watchObservedRunningTime="2026-01-22 12:43:35.80905739 +0000 UTC m=+3350.553005761" Jan 22 12:43:36 crc kubenswrapper[5120]: I0122 12:43:36.699247 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:36 crc kubenswrapper[5120]: I0122 12:43:36.752380 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rp4g5"] Jan 22 12:43:38 crc kubenswrapper[5120]: I0122 12:43:38.677050 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rp4g5" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerName="registry-server" containerID="cri-o://47afaf343a8e57a2141b4fca7f97fbb2810bf0c2eee6c99703640a2db6eb664b" gracePeriod=2 Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.691182 5120 generic.go:358] "Generic (PLEG): container finished" podID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerID="47afaf343a8e57a2141b4fca7f97fbb2810bf0c2eee6c99703640a2db6eb664b" exitCode=0 Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.691242 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4g5" event={"ID":"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10","Type":"ContainerDied","Data":"47afaf343a8e57a2141b4fca7f97fbb2810bf0c2eee6c99703640a2db6eb664b"} Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.691794 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4g5" event={"ID":"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10","Type":"ContainerDied","Data":"79b9ed715683bd1289c5364bc8a7b9157725298506287b161a5bb30a388ac7a4"} Jan 22 12:43:39 crc 
kubenswrapper[5120]: I0122 12:43:39.691887 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79b9ed715683bd1289c5364bc8a7b9157725298506287b161a5bb30a388ac7a4" Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.696241 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.749891 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nbfw\" (UniqueName: \"kubernetes.io/projected/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-kube-api-access-2nbfw\") pod \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.749996 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-utilities\") pod \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.750046 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-catalog-content\") pod \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\" (UID: \"83f88177-dcfc-4ca5-bd2e-e35e59f4ff10\") " Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.751482 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-utilities" (OuterVolumeSpecName: "utilities") pod "83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" (UID: "83f88177-dcfc-4ca5-bd2e-e35e59f4ff10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.757710 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-kube-api-access-2nbfw" (OuterVolumeSpecName: "kube-api-access-2nbfw") pod "83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" (UID: "83f88177-dcfc-4ca5-bd2e-e35e59f4ff10"). InnerVolumeSpecName "kube-api-access-2nbfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.798177 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" (UID: "83f88177-dcfc-4ca5-bd2e-e35e59f4ff10"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.851982 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2nbfw\" (UniqueName: \"kubernetes.io/projected/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-kube-api-access-2nbfw\") on node \"crc\" DevicePath \"\"" Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.852023 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:43:39 crc kubenswrapper[5120]: I0122 12:43:39.852034 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:43:40 crc kubenswrapper[5120]: I0122 12:43:40.699576 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp4g5" Jan 22 12:43:40 crc kubenswrapper[5120]: I0122 12:43:40.749240 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rp4g5"] Jan 22 12:43:40 crc kubenswrapper[5120]: I0122 12:43:40.762098 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rp4g5"] Jan 22 12:43:41 crc kubenswrapper[5120]: I0122 12:43:41.589403 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" path="/var/lib/kubelet/pods/83f88177-dcfc-4ca5-bd2e-e35e59f4ff10/volumes" Jan 22 12:43:42 crc kubenswrapper[5120]: I0122 12:43:42.573349 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:43:42 crc kubenswrapper[5120]: E0122 12:43:42.573951 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:43:56 crc kubenswrapper[5120]: I0122 12:43:56.572155 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:43:56 crc kubenswrapper[5120]: E0122 12:43:56.573277 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.193511 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484764-lssmg"] Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.195464 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerName="extract-content" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.195488 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" 
containerName="extract-content" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.195529 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerName="registry-server" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.195539 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerName="registry-server" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.195578 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerName="extract-utilities" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.195590 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerName="extract-utilities" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.195906 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="83f88177-dcfc-4ca5-bd2e-e35e59f4ff10" containerName="registry-server" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.204241 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484764-lssmg" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.206485 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484764-lssmg"] Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.207187 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.207469 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.208820 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.310472 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj4w8\" (UniqueName: \"kubernetes.io/projected/5973be67-1e77-468f-aace-0dc45ba40609-kube-api-access-jj4w8\") pod \"auto-csr-approver-29484764-lssmg\" (UID: \"5973be67-1e77-468f-aace-0dc45ba40609\") " pod="openshift-infra/auto-csr-approver-29484764-lssmg" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.412512 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jj4w8\" (UniqueName: \"kubernetes.io/projected/5973be67-1e77-468f-aace-0dc45ba40609-kube-api-access-jj4w8\") pod \"auto-csr-approver-29484764-lssmg\" (UID: \"5973be67-1e77-468f-aace-0dc45ba40609\") " pod="openshift-infra/auto-csr-approver-29484764-lssmg" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.450461 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj4w8\" (UniqueName: \"kubernetes.io/projected/5973be67-1e77-468f-aace-0dc45ba40609-kube-api-access-jj4w8\") pod \"auto-csr-approver-29484764-lssmg\" (UID: \"5973be67-1e77-468f-aace-0dc45ba40609\") " pod="openshift-infra/auto-csr-approver-29484764-lssmg" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.521985 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484764-lssmg" Jan 22 12:44:00 crc kubenswrapper[5120]: I0122 12:44:00.998011 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484764-lssmg"] Jan 22 12:44:01 crc kubenswrapper[5120]: W0122 12:44:01.005883 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5973be67_1e77_468f_aace_0dc45ba40609.slice/crio-a4e3daff4aa94c757a090b07ce6c527ebd5639ec489fe94cabdad9fff91409e3 WatchSource:0}: Error finding container a4e3daff4aa94c757a090b07ce6c527ebd5639ec489fe94cabdad9fff91409e3: Status 404 returned error can't find the container with id a4e3daff4aa94c757a090b07ce6c527ebd5639ec489fe94cabdad9fff91409e3 Jan 22 12:44:01 crc kubenswrapper[5120]: I0122 12:44:01.890171 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484764-lssmg" event={"ID":"5973be67-1e77-468f-aace-0dc45ba40609","Type":"ContainerStarted","Data":"a4e3daff4aa94c757a090b07ce6c527ebd5639ec489fe94cabdad9fff91409e3"} Jan 22 12:44:02 crc kubenswrapper[5120]: I0122 12:44:02.897922 5120 generic.go:358] "Generic (PLEG): container finished" podID="5973be67-1e77-468f-aace-0dc45ba40609" containerID="da1b834fe11918b7b503fbd82eb99354219ce8355dd6b17dd9e4af5acf161805" exitCode=0 Jan 22 12:44:02 crc kubenswrapper[5120]: I0122 12:44:02.898294 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484764-lssmg" event={"ID":"5973be67-1e77-468f-aace-0dc45ba40609","Type":"ContainerDied","Data":"da1b834fe11918b7b503fbd82eb99354219ce8355dd6b17dd9e4af5acf161805"} Jan 22 12:44:04 crc kubenswrapper[5120]: I0122 12:44:04.269029 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484764-lssmg" Jan 22 12:44:04 crc kubenswrapper[5120]: I0122 12:44:04.391552 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj4w8\" (UniqueName: \"kubernetes.io/projected/5973be67-1e77-468f-aace-0dc45ba40609-kube-api-access-jj4w8\") pod \"5973be67-1e77-468f-aace-0dc45ba40609\" (UID: \"5973be67-1e77-468f-aace-0dc45ba40609\") " Jan 22 12:44:04 crc kubenswrapper[5120]: I0122 12:44:04.399190 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5973be67-1e77-468f-aace-0dc45ba40609-kube-api-access-jj4w8" (OuterVolumeSpecName: "kube-api-access-jj4w8") pod "5973be67-1e77-468f-aace-0dc45ba40609" (UID: "5973be67-1e77-468f-aace-0dc45ba40609"). InnerVolumeSpecName "kube-api-access-jj4w8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:44:04 crc kubenswrapper[5120]: I0122 12:44:04.494138 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jj4w8\" (UniqueName: \"kubernetes.io/projected/5973be67-1e77-468f-aace-0dc45ba40609-kube-api-access-jj4w8\") on node \"crc\" DevicePath \"\"" Jan 22 12:44:04 crc kubenswrapper[5120]: I0122 12:44:04.919286 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484764-lssmg" Jan 22 12:44:04 crc kubenswrapper[5120]: I0122 12:44:04.919313 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484764-lssmg" event={"ID":"5973be67-1e77-468f-aace-0dc45ba40609","Type":"ContainerDied","Data":"a4e3daff4aa94c757a090b07ce6c527ebd5639ec489fe94cabdad9fff91409e3"} Jan 22 12:44:04 crc kubenswrapper[5120]: I0122 12:44:04.919371 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4e3daff4aa94c757a090b07ce6c527ebd5639ec489fe94cabdad9fff91409e3" Jan 22 12:44:05 crc kubenswrapper[5120]: I0122 12:44:05.328750 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484758-hjfwt"] Jan 22 12:44:05 crc kubenswrapper[5120]: I0122 12:44:05.334029 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484758-hjfwt"] Jan 22 12:44:05 crc kubenswrapper[5120]: I0122 12:44:05.587802 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96479bdf-524d-44cf-84b0-0be4a402a317" path="/var/lib/kubelet/pods/96479bdf-524d-44cf-84b0-0be4a402a317/volumes" Jan 22 12:44:11 crc kubenswrapper[5120]: I0122 12:44:11.572563 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:44:11 crc kubenswrapper[5120]: E0122 12:44:11.573878 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:44:23 crc kubenswrapper[5120]: I0122 12:44:23.571755 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:44:23 crc kubenswrapper[5120]: E0122 12:44:23.574492 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.315607 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2k92n"] Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.317671 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5973be67-1e77-468f-aace-0dc45ba40609" containerName="oc" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.317798 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="5973be67-1e77-468f-aace-0dc45ba40609" containerName="oc" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.318112 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="5973be67-1e77-468f-aace-0dc45ba40609" containerName="oc" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.354353 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2k92n"] Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.354764 5120 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.491801 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-catalog-content\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.492117 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb8wn\" (UniqueName: \"kubernetes.io/projected/36231898-a2c8-4be7-bd5b-c69ebfb5d706-kube-api-access-wb8wn\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.492231 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-utilities\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.593629 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-catalog-content\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.593755 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wb8wn\" (UniqueName: \"kubernetes.io/projected/36231898-a2c8-4be7-bd5b-c69ebfb5d706-kube-api-access-wb8wn\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.593800 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-utilities\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.594258 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-catalog-content\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.594288 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-utilities\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.614249 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb8wn\" (UniqueName: 
\"kubernetes.io/projected/36231898-a2c8-4be7-bd5b-c69ebfb5d706-kube-api-access-wb8wn\") pod \"community-operators-2k92n\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.684185 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:26 crc kubenswrapper[5120]: I0122 12:44:26.990377 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2k92n"] Jan 22 12:44:27 crc kubenswrapper[5120]: I0122 12:44:27.136120 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2k92n" event={"ID":"36231898-a2c8-4be7-bd5b-c69ebfb5d706","Type":"ContainerStarted","Data":"1a6185c4923561c56fb91a67e997cf57542fd8b5fcf6e9a8a76e540b46ee71dc"} Jan 22 12:44:28 crc kubenswrapper[5120]: I0122 12:44:28.147436 5120 generic.go:358] "Generic (PLEG): container finished" podID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerID="df894cbd14d111aa39e607965a1b6af460e8994f5050da70ec1fedf59572b128" exitCode=0 Jan 22 12:44:28 crc kubenswrapper[5120]: I0122 12:44:28.147520 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2k92n" event={"ID":"36231898-a2c8-4be7-bd5b-c69ebfb5d706","Type":"ContainerDied","Data":"df894cbd14d111aa39e607965a1b6af460e8994f5050da70ec1fedf59572b128"} Jan 22 12:44:29 crc kubenswrapper[5120]: I0122 12:44:29.158290 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2k92n" event={"ID":"36231898-a2c8-4be7-bd5b-c69ebfb5d706","Type":"ContainerStarted","Data":"7577cc176c59d8a1b850253c91e79633c88e50d0033b12cbbbe51ac9e566cb87"} Jan 22 12:44:30 crc kubenswrapper[5120]: I0122 12:44:30.170301 5120 generic.go:358] "Generic (PLEG): container finished" podID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerID="7577cc176c59d8a1b850253c91e79633c88e50d0033b12cbbbe51ac9e566cb87" exitCode=0 Jan 22 12:44:30 crc kubenswrapper[5120]: I0122 12:44:30.170359 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2k92n" event={"ID":"36231898-a2c8-4be7-bd5b-c69ebfb5d706","Type":"ContainerDied","Data":"7577cc176c59d8a1b850253c91e79633c88e50d0033b12cbbbe51ac9e566cb87"} Jan 22 12:44:31 crc kubenswrapper[5120]: I0122 12:44:31.182507 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2k92n" event={"ID":"36231898-a2c8-4be7-bd5b-c69ebfb5d706","Type":"ContainerStarted","Data":"d71d985c8946c0712b29a8794ea0adff138ddb012d5ad67ecb339e9f3ec13b3d"} Jan 22 12:44:31 crc kubenswrapper[5120]: I0122 12:44:31.214217 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2k92n" podStartSLOduration=4.559514791 podStartE2EDuration="5.214198686s" podCreationTimestamp="2026-01-22 12:44:26 +0000 UTC" firstStartedPulling="2026-01-22 12:44:28.149174033 +0000 UTC m=+3402.893122415" lastFinishedPulling="2026-01-22 12:44:28.803857939 +0000 UTC m=+3403.547806310" observedRunningTime="2026-01-22 12:44:31.20341088 +0000 UTC m=+3405.947359231" watchObservedRunningTime="2026-01-22 12:44:31.214198686 +0000 UTC m=+3405.958147027" Jan 22 12:44:36 crc kubenswrapper[5120]: I0122 12:44:36.684713 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-2k92n" Jan 22 
12:44:36 crc kubenswrapper[5120]: I0122 12:44:36.685270 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:36 crc kubenswrapper[5120]: I0122 12:44:36.751837 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:37 crc kubenswrapper[5120]: I0122 12:44:37.297774 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:37 crc kubenswrapper[5120]: I0122 12:44:37.350819 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2k92n"] Jan 22 12:44:37 crc kubenswrapper[5120]: I0122 12:44:37.572396 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:44:37 crc kubenswrapper[5120]: E0122 12:44:37.572907 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:44:39 crc kubenswrapper[5120]: I0122 12:44:39.254619 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2k92n" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="registry-server" containerID="cri-o://d71d985c8946c0712b29a8794ea0adff138ddb012d5ad67ecb339e9f3ec13b3d" gracePeriod=2 Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.269762 5120 generic.go:358] "Generic (PLEG): container finished" podID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerID="d71d985c8946c0712b29a8794ea0adff138ddb012d5ad67ecb339e9f3ec13b3d" exitCode=0 Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.269874 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2k92n" event={"ID":"36231898-a2c8-4be7-bd5b-c69ebfb5d706","Type":"ContainerDied","Data":"d71d985c8946c0712b29a8794ea0adff138ddb012d5ad67ecb339e9f3ec13b3d"} Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.782336 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.943814 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-catalog-content\") pod \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.943947 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb8wn\" (UniqueName: \"kubernetes.io/projected/36231898-a2c8-4be7-bd5b-c69ebfb5d706-kube-api-access-wb8wn\") pod \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.944016 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-utilities\") pod \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\" (UID: \"36231898-a2c8-4be7-bd5b-c69ebfb5d706\") " Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.945265 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-utilities" (OuterVolumeSpecName: "utilities") pod "36231898-a2c8-4be7-bd5b-c69ebfb5d706" (UID: "36231898-a2c8-4be7-bd5b-c69ebfb5d706"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:44:40 crc kubenswrapper[5120]: I0122 12:44:40.951309 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36231898-a2c8-4be7-bd5b-c69ebfb5d706-kube-api-access-wb8wn" (OuterVolumeSpecName: "kube-api-access-wb8wn") pod "36231898-a2c8-4be7-bd5b-c69ebfb5d706" (UID: "36231898-a2c8-4be7-bd5b-c69ebfb5d706"). InnerVolumeSpecName "kube-api-access-wb8wn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.021241 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36231898-a2c8-4be7-bd5b-c69ebfb5d706" (UID: "36231898-a2c8-4be7-bd5b-c69ebfb5d706"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.045514 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.045547 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wb8wn\" (UniqueName: \"kubernetes.io/projected/36231898-a2c8-4be7-bd5b-c69ebfb5d706-kube-api-access-wb8wn\") on node \"crc\" DevicePath \"\"" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.045559 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36231898-a2c8-4be7-bd5b-c69ebfb5d706-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.284311 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2k92n" event={"ID":"36231898-a2c8-4be7-bd5b-c69ebfb5d706","Type":"ContainerDied","Data":"1a6185c4923561c56fb91a67e997cf57542fd8b5fcf6e9a8a76e540b46ee71dc"} Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.284386 5120 scope.go:117] "RemoveContainer" containerID="d71d985c8946c0712b29a8794ea0adff138ddb012d5ad67ecb339e9f3ec13b3d" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.284590 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2k92n" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.320762 5120 scope.go:117] "RemoveContainer" containerID="7577cc176c59d8a1b850253c91e79633c88e50d0033b12cbbbe51ac9e566cb87" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.337731 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2k92n"] Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.347770 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2k92n"] Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.354862 5120 scope.go:117] "RemoveContainer" containerID="df894cbd14d111aa39e607965a1b6af460e8994f5050da70ec1fedf59572b128" Jan 22 12:44:41 crc kubenswrapper[5120]: I0122 12:44:41.586307 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" path="/var/lib/kubelet/pods/36231898-a2c8-4be7-bd5b-c69ebfb5d706/volumes" Jan 22 12:44:51 crc kubenswrapper[5120]: I0122 12:44:51.573875 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:44:51 crc kubenswrapper[5120]: E0122 12:44:51.575529 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.183851 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m"] Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.187865 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="registry-server" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.188044 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="registry-server" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.188172 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="extract-content" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.188255 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="extract-content" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.188831 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="extract-utilities" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.189042 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="extract-utilities" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.189356 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="36231898-a2c8-4be7-bd5b-c69ebfb5d706" containerName="registry-server" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.198285 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m"] Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.198535 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.201242 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.207928 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.277047 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882b7ca2-9793-49f3-b5e8-883119a96591-secret-volume\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.277145 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hmgf\" (UniqueName: \"kubernetes.io/projected/882b7ca2-9793-49f3-b5e8-883119a96591-kube-api-access-4hmgf\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.277458 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882b7ca2-9793-49f3-b5e8-883119a96591-config-volume\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.378749 5120 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4hmgf\" (UniqueName: \"kubernetes.io/projected/882b7ca2-9793-49f3-b5e8-883119a96591-kube-api-access-4hmgf\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.378901 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882b7ca2-9793-49f3-b5e8-883119a96591-config-volume\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.379053 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882b7ca2-9793-49f3-b5e8-883119a96591-secret-volume\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.380585 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882b7ca2-9793-49f3-b5e8-883119a96591-config-volume\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.390661 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882b7ca2-9793-49f3-b5e8-883119a96591-secret-volume\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.410752 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hmgf\" (UniqueName: \"kubernetes.io/projected/882b7ca2-9793-49f3-b5e8-883119a96591-kube-api-access-4hmgf\") pod \"collect-profiles-29484765-s285m\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:00 crc kubenswrapper[5120]: I0122 12:45:00.538518 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:01 crc kubenswrapper[5120]: I0122 12:45:01.009616 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m"] Jan 22 12:45:01 crc kubenswrapper[5120]: W0122 12:45:01.019360 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod882b7ca2_9793_49f3_b5e8_883119a96591.slice/crio-731d7ae4d4b54d14bb5f9952899b0ab25b53091a4668c97c1dd873d614957f72 WatchSource:0}: Error finding container 731d7ae4d4b54d14bb5f9952899b0ab25b53091a4668c97c1dd873d614957f72: Status 404 returned error can't find the container with id 731d7ae4d4b54d14bb5f9952899b0ab25b53091a4668c97c1dd873d614957f72 Jan 22 12:45:01 crc kubenswrapper[5120]: I0122 12:45:01.483236 5120 generic.go:358] "Generic (PLEG): container finished" podID="882b7ca2-9793-49f3-b5e8-883119a96591" containerID="500bb78c536c9c94640d7c27f7b87d17493e14dcebcc3f4e10a31b030bc88263" exitCode=0 Jan 22 12:45:01 crc kubenswrapper[5120]: I0122 12:45:01.483423 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" event={"ID":"882b7ca2-9793-49f3-b5e8-883119a96591","Type":"ContainerDied","Data":"500bb78c536c9c94640d7c27f7b87d17493e14dcebcc3f4e10a31b030bc88263"} Jan 22 12:45:01 crc kubenswrapper[5120]: I0122 12:45:01.483749 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" event={"ID":"882b7ca2-9793-49f3-b5e8-883119a96591","Type":"ContainerStarted","Data":"731d7ae4d4b54d14bb5f9952899b0ab25b53091a4668c97c1dd873d614957f72"} Jan 22 12:45:02 crc kubenswrapper[5120]: I0122 12:45:02.807558 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:02 crc kubenswrapper[5120]: I0122 12:45:02.926275 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882b7ca2-9793-49f3-b5e8-883119a96591-secret-volume\") pod \"882b7ca2-9793-49f3-b5e8-883119a96591\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " Jan 22 12:45:02 crc kubenswrapper[5120]: I0122 12:45:02.927529 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882b7ca2-9793-49f3-b5e8-883119a96591-config-volume\") pod \"882b7ca2-9793-49f3-b5e8-883119a96591\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " Jan 22 12:45:02 crc kubenswrapper[5120]: I0122 12:45:02.927830 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hmgf\" (UniqueName: \"kubernetes.io/projected/882b7ca2-9793-49f3-b5e8-883119a96591-kube-api-access-4hmgf\") pod \"882b7ca2-9793-49f3-b5e8-883119a96591\" (UID: \"882b7ca2-9793-49f3-b5e8-883119a96591\") " Jan 22 12:45:02 crc kubenswrapper[5120]: I0122 12:45:02.931223 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882b7ca2-9793-49f3-b5e8-883119a96591-config-volume" (OuterVolumeSpecName: "config-volume") pod "882b7ca2-9793-49f3-b5e8-883119a96591" (UID: "882b7ca2-9793-49f3-b5e8-883119a96591"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 12:45:02 crc kubenswrapper[5120]: I0122 12:45:02.935358 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/882b7ca2-9793-49f3-b5e8-883119a96591-kube-api-access-4hmgf" (OuterVolumeSpecName: "kube-api-access-4hmgf") pod "882b7ca2-9793-49f3-b5e8-883119a96591" (UID: "882b7ca2-9793-49f3-b5e8-883119a96591"). InnerVolumeSpecName "kube-api-access-4hmgf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:45:02 crc kubenswrapper[5120]: I0122 12:45:02.942784 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/882b7ca2-9793-49f3-b5e8-883119a96591-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "882b7ca2-9793-49f3-b5e8-883119a96591" (UID: "882b7ca2-9793-49f3-b5e8-883119a96591"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.031469 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/882b7ca2-9793-49f3-b5e8-883119a96591-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.031529 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/882b7ca2-9793-49f3-b5e8-883119a96591-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.031553 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hmgf\" (UniqueName: \"kubernetes.io/projected/882b7ca2-9793-49f3-b5e8-883119a96591-kube-api-access-4hmgf\") on node \"crc\" DevicePath \"\"" Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.504242 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" event={"ID":"882b7ca2-9793-49f3-b5e8-883119a96591","Type":"ContainerDied","Data":"731d7ae4d4b54d14bb5f9952899b0ab25b53091a4668c97c1dd873d614957f72"} Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.504628 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="731d7ae4d4b54d14bb5f9952899b0ab25b53091a4668c97c1dd873d614957f72" Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.504535 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484765-s285m" Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.878483 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq"] Jan 22 12:45:03 crc kubenswrapper[5120]: I0122 12:45:03.887845 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484720-bt5vq"] Jan 22 12:45:04 crc kubenswrapper[5120]: I0122 12:45:04.098059 5120 scope.go:117] "RemoveContainer" containerID="7a805c4c05ced5399f3bf914b6a245f885524c5c4ac80c4ac8f87f8faa63c41b" Jan 22 12:45:05 crc kubenswrapper[5120]: I0122 12:45:05.577102 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:45:05 crc kubenswrapper[5120]: E0122 12:45:05.577563 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:45:05 crc kubenswrapper[5120]: I0122 12:45:05.584944 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d57ca8ee-4b8e-4b45-983a-11332a457cf8" path="/var/lib/kubelet/pods/d57ca8ee-4b8e-4b45-983a-11332a457cf8/volumes" Jan 22 12:45:16 crc kubenswrapper[5120]: I0122 12:45:16.571818 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:45:16 crc kubenswrapper[5120]: E0122 12:45:16.574474 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:45:28 crc kubenswrapper[5120]: I0122 12:45:28.573476 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:45:28 crc kubenswrapper[5120]: E0122 12:45:28.574858 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:45:40 crc kubenswrapper[5120]: I0122 12:45:40.572491 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:45:40 crc kubenswrapper[5120]: E0122 12:45:40.573573 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:45:51 crc kubenswrapper[5120]: I0122 12:45:51.572612 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:45:51 crc kubenswrapper[5120]: E0122 12:45:51.573427 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.151063 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484766-r7mx5"] Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.152630 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="882b7ca2-9793-49f3-b5e8-883119a96591" containerName="collect-profiles" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.152644 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="882b7ca2-9793-49f3-b5e8-883119a96591" containerName="collect-profiles" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.153001 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="882b7ca2-9793-49f3-b5e8-883119a96591" containerName="collect-profiles" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.158420 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484766-r7mx5" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.161944 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.162353 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.164226 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.170258 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484766-r7mx5"] Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.340268 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q6vp\" (UniqueName: \"kubernetes.io/projected/f4e688fc-6166-4472-9385-e06fa5bc818b-kube-api-access-6q6vp\") pod \"auto-csr-approver-29484766-r7mx5\" (UID: \"f4e688fc-6166-4472-9385-e06fa5bc818b\") " pod="openshift-infra/auto-csr-approver-29484766-r7mx5" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.441791 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6q6vp\" (UniqueName: \"kubernetes.io/projected/f4e688fc-6166-4472-9385-e06fa5bc818b-kube-api-access-6q6vp\") pod \"auto-csr-approver-29484766-r7mx5\" (UID: \"f4e688fc-6166-4472-9385-e06fa5bc818b\") " pod="openshift-infra/auto-csr-approver-29484766-r7mx5" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.475755 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q6vp\" (UniqueName: 
\"kubernetes.io/projected/f4e688fc-6166-4472-9385-e06fa5bc818b-kube-api-access-6q6vp\") pod \"auto-csr-approver-29484766-r7mx5\" (UID: \"f4e688fc-6166-4472-9385-e06fa5bc818b\") " pod="openshift-infra/auto-csr-approver-29484766-r7mx5" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.481621 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484766-r7mx5" Jan 22 12:46:00 crc kubenswrapper[5120]: I0122 12:46:00.991827 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484766-r7mx5"] Jan 22 12:46:01 crc kubenswrapper[5120]: I0122 12:46:01.056501 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484766-r7mx5" event={"ID":"f4e688fc-6166-4472-9385-e06fa5bc818b","Type":"ContainerStarted","Data":"57ffd3f0749e58b9469f2abea60a7fcc1c2e503b0e7f4355ea0b617ec51ab4a1"} Jan 22 12:46:03 crc kubenswrapper[5120]: I0122 12:46:03.100871 5120 generic.go:358] "Generic (PLEG): container finished" podID="f4e688fc-6166-4472-9385-e06fa5bc818b" containerID="93255bc069317c1b98c7e5d464d634946dfb59ed2823b2a9ae9c562272242064" exitCode=0 Jan 22 12:46:03 crc kubenswrapper[5120]: I0122 12:46:03.101029 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484766-r7mx5" event={"ID":"f4e688fc-6166-4472-9385-e06fa5bc818b","Type":"ContainerDied","Data":"93255bc069317c1b98c7e5d464d634946dfb59ed2823b2a9ae9c562272242064"} Jan 22 12:46:04 crc kubenswrapper[5120]: I0122 12:46:04.292109 5120 scope.go:117] "RemoveContainer" containerID="73df242a325822ccf1cead216fb72d99d7eb4b7f40cfe98bdeb214c25306e468" Jan 22 12:46:04 crc kubenswrapper[5120]: I0122 12:46:04.438125 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484766-r7mx5" Jan 22 12:46:04 crc kubenswrapper[5120]: I0122 12:46:04.511229 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q6vp\" (UniqueName: \"kubernetes.io/projected/f4e688fc-6166-4472-9385-e06fa5bc818b-kube-api-access-6q6vp\") pod \"f4e688fc-6166-4472-9385-e06fa5bc818b\" (UID: \"f4e688fc-6166-4472-9385-e06fa5bc818b\") " Jan 22 12:46:04 crc kubenswrapper[5120]: I0122 12:46:04.518346 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e688fc-6166-4472-9385-e06fa5bc818b-kube-api-access-6q6vp" (OuterVolumeSpecName: "kube-api-access-6q6vp") pod "f4e688fc-6166-4472-9385-e06fa5bc818b" (UID: "f4e688fc-6166-4472-9385-e06fa5bc818b"). InnerVolumeSpecName "kube-api-access-6q6vp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:46:04 crc kubenswrapper[5120]: I0122 12:46:04.614582 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6q6vp\" (UniqueName: \"kubernetes.io/projected/f4e688fc-6166-4472-9385-e06fa5bc818b-kube-api-access-6q6vp\") on node \"crc\" DevicePath \"\"" Jan 22 12:46:05 crc kubenswrapper[5120]: I0122 12:46:05.124970 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484766-r7mx5" event={"ID":"f4e688fc-6166-4472-9385-e06fa5bc818b","Type":"ContainerDied","Data":"57ffd3f0749e58b9469f2abea60a7fcc1c2e503b0e7f4355ea0b617ec51ab4a1"} Jan 22 12:46:05 crc kubenswrapper[5120]: I0122 12:46:05.125342 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57ffd3f0749e58b9469f2abea60a7fcc1c2e503b0e7f4355ea0b617ec51ab4a1" Jan 22 12:46:05 crc kubenswrapper[5120]: I0122 12:46:05.124984 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484766-r7mx5" Jan 22 12:46:05 crc kubenswrapper[5120]: I0122 12:46:05.538617 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484760-9gmsd"] Jan 22 12:46:05 crc kubenswrapper[5120]: I0122 12:46:05.549563 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484760-9gmsd"] Jan 22 12:46:05 crc kubenswrapper[5120]: I0122 12:46:05.602717 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaee48fe-e9ab-42e2-926c-6d27414eec47" path="/var/lib/kubelet/pods/eaee48fe-e9ab-42e2-926c-6d27414eec47/volumes" Jan 22 12:46:06 crc kubenswrapper[5120]: I0122 12:46:06.571412 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:46:07 crc kubenswrapper[5120]: I0122 12:46:07.144280 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"29f33e1dc1313dbff18da2384f7e62acd6b281793c17af877e5bfeb2aa570d05"} Jan 22 12:47:04 crc kubenswrapper[5120]: I0122 12:47:04.371669 5120 scope.go:117] "RemoveContainer" containerID="d38a722a84b2b8810e74617131f1d0281e3449f071650edfa7fce4122e413c26" Jan 22 12:47:52 crc kubenswrapper[5120]: I0122 12:47:52.235440 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:47:52 crc kubenswrapper[5120]: I0122 12:47:52.236755 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:47:52 crc kubenswrapper[5120]: I0122 12:47:52.248929 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:47:52 crc kubenswrapper[5120]: I0122 12:47:52.249083 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.142462 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484768-cfmpc"] Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.144620 5120 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f4e688fc-6166-4472-9385-e06fa5bc818b" containerName="oc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.144640 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e688fc-6166-4472-9385-e06fa5bc818b" containerName="oc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.144846 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f4e688fc-6166-4472-9385-e06fa5bc818b" containerName="oc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.152370 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.157700 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.157943 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.157749 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.163247 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484768-cfmpc"] Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.235844 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpztc\" (UniqueName: \"kubernetes.io/projected/f1196931-91a2-4869-bff6-80785ee0ed43-kube-api-access-zpztc\") pod \"auto-csr-approver-29484768-cfmpc\" (UID: \"f1196931-91a2-4869-bff6-80785ee0ed43\") " pod="openshift-infra/auto-csr-approver-29484768-cfmpc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.338083 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zpztc\" (UniqueName: \"kubernetes.io/projected/f1196931-91a2-4869-bff6-80785ee0ed43-kube-api-access-zpztc\") pod \"auto-csr-approver-29484768-cfmpc\" (UID: \"f1196931-91a2-4869-bff6-80785ee0ed43\") " pod="openshift-infra/auto-csr-approver-29484768-cfmpc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.378476 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpztc\" (UniqueName: \"kubernetes.io/projected/f1196931-91a2-4869-bff6-80785ee0ed43-kube-api-access-zpztc\") pod \"auto-csr-approver-29484768-cfmpc\" (UID: \"f1196931-91a2-4869-bff6-80785ee0ed43\") " pod="openshift-infra/auto-csr-approver-29484768-cfmpc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.481263 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.810532 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484768-cfmpc"] Jan 22 12:48:00 crc kubenswrapper[5120]: I0122 12:48:00.931262 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" event={"ID":"f1196931-91a2-4869-bff6-80785ee0ed43","Type":"ContainerStarted","Data":"1e40c7568c23c2fbd806122ab6571af7e58c89c117d744276fc9ff6c70409e6c"} Jan 22 12:48:02 crc kubenswrapper[5120]: I0122 12:48:02.947834 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" event={"ID":"f1196931-91a2-4869-bff6-80785ee0ed43","Type":"ContainerStarted","Data":"daf41329d180dcc37fb3f371cdaf516e4d7ff24c8288949d26f7303b4e826d13"} Jan 22 12:48:03 crc kubenswrapper[5120]: I0122 12:48:03.959738 5120 generic.go:358] "Generic (PLEG): container finished" podID="f1196931-91a2-4869-bff6-80785ee0ed43" containerID="daf41329d180dcc37fb3f371cdaf516e4d7ff24c8288949d26f7303b4e826d13" exitCode=0 Jan 22 12:48:03 crc kubenswrapper[5120]: I0122 12:48:03.959859 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" event={"ID":"f1196931-91a2-4869-bff6-80785ee0ed43","Type":"ContainerDied","Data":"daf41329d180dcc37fb3f371cdaf516e4d7ff24c8288949d26f7303b4e826d13"} Jan 22 12:48:05 crc kubenswrapper[5120]: I0122 12:48:05.260905 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" Jan 22 12:48:05 crc kubenswrapper[5120]: I0122 12:48:05.431916 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpztc\" (UniqueName: \"kubernetes.io/projected/f1196931-91a2-4869-bff6-80785ee0ed43-kube-api-access-zpztc\") pod \"f1196931-91a2-4869-bff6-80785ee0ed43\" (UID: \"f1196931-91a2-4869-bff6-80785ee0ed43\") " Jan 22 12:48:05 crc kubenswrapper[5120]: I0122 12:48:05.438247 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1196931-91a2-4869-bff6-80785ee0ed43-kube-api-access-zpztc" (OuterVolumeSpecName: "kube-api-access-zpztc") pod "f1196931-91a2-4869-bff6-80785ee0ed43" (UID: "f1196931-91a2-4869-bff6-80785ee0ed43"). InnerVolumeSpecName "kube-api-access-zpztc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:48:05 crc kubenswrapper[5120]: I0122 12:48:05.533517 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zpztc\" (UniqueName: \"kubernetes.io/projected/f1196931-91a2-4869-bff6-80785ee0ed43-kube-api-access-zpztc\") on node \"crc\" DevicePath \"\"" Jan 22 12:48:05 crc kubenswrapper[5120]: I0122 12:48:05.981637 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" Jan 22 12:48:05 crc kubenswrapper[5120]: I0122 12:48:05.981678 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484768-cfmpc" event={"ID":"f1196931-91a2-4869-bff6-80785ee0ed43","Type":"ContainerDied","Data":"1e40c7568c23c2fbd806122ab6571af7e58c89c117d744276fc9ff6c70409e6c"} Jan 22 12:48:05 crc kubenswrapper[5120]: I0122 12:48:05.981888 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e40c7568c23c2fbd806122ab6571af7e58c89c117d744276fc9ff6c70409e6c" Jan 22 12:48:06 crc kubenswrapper[5120]: I0122 12:48:06.019260 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484762-tjrcq"] Jan 22 12:48:06 crc kubenswrapper[5120]: I0122 12:48:06.024027 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484762-tjrcq"] Jan 22 12:48:07 crc kubenswrapper[5120]: I0122 12:48:07.582823 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4579a92b-d731-4627-b131-998575817977" path="/var/lib/kubelet/pods/4579a92b-d731-4627-b131-998575817977/volumes" Jan 22 12:48:31 crc kubenswrapper[5120]: I0122 12:48:31.972700 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:48:31 crc kubenswrapper[5120]: I0122 12:48:31.973259 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:49:01 crc kubenswrapper[5120]: I0122 12:49:01.972562 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:49:01 crc kubenswrapper[5120]: I0122 12:49:01.973300 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:49:04 crc kubenswrapper[5120]: I0122 12:49:04.555483 5120 scope.go:117] "RemoveContainer" containerID="2a6f5b0d983a897bcecca87bafc7ac00eaf5f0a889d5650209a6e10cf38669b5" Jan 22 12:49:31 crc kubenswrapper[5120]: I0122 12:49:31.972187 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:49:31 crc kubenswrapper[5120]: I0122 12:49:31.974047 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:49:31 crc kubenswrapper[5120]: I0122 12:49:31.974525 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:49:31 crc kubenswrapper[5120]: I0122 12:49:31.975268 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29f33e1dc1313dbff18da2384f7e62acd6b281793c17af877e5bfeb2aa570d05"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:49:31 crc kubenswrapper[5120]: I0122 12:49:31.975406 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://29f33e1dc1313dbff18da2384f7e62acd6b281793c17af877e5bfeb2aa570d05" gracePeriod=600 Jan 22 12:49:32 crc kubenswrapper[5120]: I0122 12:49:32.609696 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:49:33 crc kubenswrapper[5120]: I0122 12:49:33.258558 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="29f33e1dc1313dbff18da2384f7e62acd6b281793c17af877e5bfeb2aa570d05" exitCode=0 Jan 22 12:49:33 crc kubenswrapper[5120]: I0122 12:49:33.259448 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"29f33e1dc1313dbff18da2384f7e62acd6b281793c17af877e5bfeb2aa570d05"} Jan 22 12:49:33 crc kubenswrapper[5120]: I0122 12:49:33.260163 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e"} Jan 22 12:49:33 crc kubenswrapper[5120]: I0122 12:49:33.260196 5120 scope.go:117] "RemoveContainer" containerID="cc40d4bcc65892547f86eaafc1dc9dcde42f467dc0cf6f78c66127d13693b626" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.146440 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484770-td669"] Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.147736 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f1196931-91a2-4869-bff6-80785ee0ed43" containerName="oc" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.147757 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1196931-91a2-4869-bff6-80785ee0ed43" containerName="oc" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.147943 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="f1196931-91a2-4869-bff6-80785ee0ed43" containerName="oc" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.176586 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484770-td669"] Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.176656 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484770-td669" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.178934 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.179104 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.180081 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6574\" (UniqueName: \"kubernetes.io/projected/730d9559-f767-44f0-9346-cfba60c8f1b5-kube-api-access-w6574\") pod \"auto-csr-approver-29484770-td669\" (UID: \"730d9559-f767-44f0-9346-cfba60c8f1b5\") " pod="openshift-infra/auto-csr-approver-29484770-td669" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.180134 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.281160 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w6574\" (UniqueName: \"kubernetes.io/projected/730d9559-f767-44f0-9346-cfba60c8f1b5-kube-api-access-w6574\") pod \"auto-csr-approver-29484770-td669\" (UID: \"730d9559-f767-44f0-9346-cfba60c8f1b5\") " pod="openshift-infra/auto-csr-approver-29484770-td669" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.307151 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6574\" (UniqueName: \"kubernetes.io/projected/730d9559-f767-44f0-9346-cfba60c8f1b5-kube-api-access-w6574\") pod \"auto-csr-approver-29484770-td669\" (UID: \"730d9559-f767-44f0-9346-cfba60c8f1b5\") " pod="openshift-infra/auto-csr-approver-29484770-td669" Jan 22 12:50:00 crc kubenswrapper[5120]: I0122 12:50:00.499605 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484770-td669" Jan 22 12:50:01 crc kubenswrapper[5120]: I0122 12:50:01.007655 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484770-td669"] Jan 22 12:50:01 crc kubenswrapper[5120]: I0122 12:50:01.523764 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484770-td669" event={"ID":"730d9559-f767-44f0-9346-cfba60c8f1b5","Type":"ContainerStarted","Data":"d1c95edc24c0d39e7de4f3b0e81675dd758bdd9d2a6b7cd372aedc16d036dce8"} Jan 22 12:50:02 crc kubenswrapper[5120]: I0122 12:50:02.534604 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484770-td669" event={"ID":"730d9559-f767-44f0-9346-cfba60c8f1b5","Type":"ContainerStarted","Data":"727bb28f7a024f28e2f883ea6ba608737fc5ddb620fdace8b333e8edb2713483"} Jan 22 12:50:02 crc kubenswrapper[5120]: I0122 12:50:02.553087 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484770-td669" podStartSLOduration=1.5382092379999999 podStartE2EDuration="2.553048203s" podCreationTimestamp="2026-01-22 12:50:00 +0000 UTC" firstStartedPulling="2026-01-22 12:50:01.022022133 +0000 UTC m=+3735.765970474" lastFinishedPulling="2026-01-22 12:50:02.036861058 +0000 UTC m=+3736.780809439" observedRunningTime="2026-01-22 12:50:02.550524734 +0000 UTC m=+3737.294473075" watchObservedRunningTime="2026-01-22 12:50:02.553048203 +0000 UTC m=+3737.296996544" Jan 22 12:50:03 crc kubenswrapper[5120]: I0122 12:50:03.543986 5120 generic.go:358] "Generic (PLEG): container finished" podID="730d9559-f767-44f0-9346-cfba60c8f1b5" containerID="727bb28f7a024f28e2f883ea6ba608737fc5ddb620fdace8b333e8edb2713483" exitCode=0 Jan 22 12:50:03 crc kubenswrapper[5120]: I0122 12:50:03.544078 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484770-td669" event={"ID":"730d9559-f767-44f0-9346-cfba60c8f1b5","Type":"ContainerDied","Data":"727bb28f7a024f28e2f883ea6ba608737fc5ddb620fdace8b333e8edb2713483"} Jan 22 12:50:04 crc kubenswrapper[5120]: I0122 12:50:04.740206 5120 scope.go:117] "RemoveContainer" containerID="ee52f0be235791cdfb04c7d77af1b138bf274fd830340153c8f962eccee34da4" Jan 22 12:50:04 crc kubenswrapper[5120]: I0122 12:50:04.776017 5120 scope.go:117] "RemoveContainer" containerID="8438862cfd80a291a8ce8d21963ab85a62a3192253e9207c21bfb82f7e78df12" Jan 22 12:50:04 crc kubenswrapper[5120]: I0122 12:50:04.801362 5120 scope.go:117] "RemoveContainer" containerID="47afaf343a8e57a2141b4fca7f97fbb2810bf0c2eee6c99703640a2db6eb664b" Jan 22 12:50:04 crc kubenswrapper[5120]: I0122 12:50:04.877402 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484770-td669" Jan 22 12:50:04 crc kubenswrapper[5120]: I0122 12:50:04.958647 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6574\" (UniqueName: \"kubernetes.io/projected/730d9559-f767-44f0-9346-cfba60c8f1b5-kube-api-access-w6574\") pod \"730d9559-f767-44f0-9346-cfba60c8f1b5\" (UID: \"730d9559-f767-44f0-9346-cfba60c8f1b5\") " Jan 22 12:50:04 crc kubenswrapper[5120]: I0122 12:50:04.966942 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730d9559-f767-44f0-9346-cfba60c8f1b5-kube-api-access-w6574" (OuterVolumeSpecName: "kube-api-access-w6574") pod "730d9559-f767-44f0-9346-cfba60c8f1b5" (UID: "730d9559-f767-44f0-9346-cfba60c8f1b5"). InnerVolumeSpecName "kube-api-access-w6574". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:50:05 crc kubenswrapper[5120]: I0122 12:50:05.061087 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w6574\" (UniqueName: \"kubernetes.io/projected/730d9559-f767-44f0-9346-cfba60c8f1b5-kube-api-access-w6574\") on node \"crc\" DevicePath \"\"" Jan 22 12:50:05 crc kubenswrapper[5120]: I0122 12:50:05.567088 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484770-td669" event={"ID":"730d9559-f767-44f0-9346-cfba60c8f1b5","Type":"ContainerDied","Data":"d1c95edc24c0d39e7de4f3b0e81675dd758bdd9d2a6b7cd372aedc16d036dce8"} Jan 22 12:50:05 crc kubenswrapper[5120]: I0122 12:50:05.567367 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1c95edc24c0d39e7de4f3b0e81675dd758bdd9d2a6b7cd372aedc16d036dce8" Jan 22 12:50:05 crc kubenswrapper[5120]: I0122 12:50:05.567104 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484770-td669" Jan 22 12:50:05 crc kubenswrapper[5120]: I0122 12:50:05.630840 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484764-lssmg"] Jan 22 12:50:05 crc kubenswrapper[5120]: I0122 12:50:05.640843 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484764-lssmg"] Jan 22 12:50:07 crc kubenswrapper[5120]: I0122 12:50:07.589567 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5973be67-1e77-468f-aace-0dc45ba40609" path="/var/lib/kubelet/pods/5973be67-1e77-468f-aace-0dc45ba40609/volumes" Jan 22 12:51:04 crc kubenswrapper[5120]: I0122 12:51:04.964133 5120 scope.go:117] "RemoveContainer" containerID="da1b834fe11918b7b503fbd82eb99354219ce8355dd6b17dd9e4af5acf161805" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.151905 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484772-rwp4t"] Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.153279 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="730d9559-f767-44f0-9346-cfba60c8f1b5" containerName="oc" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.153293 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="730d9559-f767-44f0-9346-cfba60c8f1b5" containerName="oc" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.153421 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="730d9559-f767-44f0-9346-cfba60c8f1b5" containerName="oc" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.172009 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484772-rwp4t"] Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.172150 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484772-rwp4t" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.181839 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.182032 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.182404 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.283931 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2kls\" (UniqueName: \"kubernetes.io/projected/813b78c4-6644-444f-baa4-af92c9a1bfd0-kube-api-access-z2kls\") pod \"auto-csr-approver-29484772-rwp4t\" (UID: \"813b78c4-6644-444f-baa4-af92c9a1bfd0\") " pod="openshift-infra/auto-csr-approver-29484772-rwp4t" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.386757 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z2kls\" (UniqueName: \"kubernetes.io/projected/813b78c4-6644-444f-baa4-af92c9a1bfd0-kube-api-access-z2kls\") pod \"auto-csr-approver-29484772-rwp4t\" (UID: \"813b78c4-6644-444f-baa4-af92c9a1bfd0\") " pod="openshift-infra/auto-csr-approver-29484772-rwp4t" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.424841 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2kls\" (UniqueName: \"kubernetes.io/projected/813b78c4-6644-444f-baa4-af92c9a1bfd0-kube-api-access-z2kls\") pod \"auto-csr-approver-29484772-rwp4t\" (UID: \"813b78c4-6644-444f-baa4-af92c9a1bfd0\") " pod="openshift-infra/auto-csr-approver-29484772-rwp4t" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.501809 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484772-rwp4t" Jan 22 12:52:00 crc kubenswrapper[5120]: I0122 12:52:00.753145 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484772-rwp4t"] Jan 22 12:52:01 crc kubenswrapper[5120]: I0122 12:52:01.678576 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484772-rwp4t" event={"ID":"813b78c4-6644-444f-baa4-af92c9a1bfd0","Type":"ContainerStarted","Data":"e77e3facd6342da7f82a78dd95e5ff9cfa5f434248b1f49ab7b339060ac887ef"} Jan 22 12:52:01 crc kubenswrapper[5120]: I0122 12:52:01.973200 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:52:01 crc kubenswrapper[5120]: I0122 12:52:01.973272 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:52:02 crc kubenswrapper[5120]: I0122 12:52:02.687896 5120 generic.go:358] "Generic (PLEG): container finished" podID="813b78c4-6644-444f-baa4-af92c9a1bfd0" containerID="ff8af05f7b27c4b094ab8e8f34a856e723d09850f96dc8e0d652385ae56780a8" exitCode=0 Jan 22 12:52:02 crc kubenswrapper[5120]: I0122 12:52:02.688191 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484772-rwp4t" event={"ID":"813b78c4-6644-444f-baa4-af92c9a1bfd0","Type":"ContainerDied","Data":"ff8af05f7b27c4b094ab8e8f34a856e723d09850f96dc8e0d652385ae56780a8"} Jan 22 12:52:03 crc kubenswrapper[5120]: I0122 12:52:03.972832 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484772-rwp4t" Jan 22 12:52:04 crc kubenswrapper[5120]: I0122 12:52:04.049181 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2kls\" (UniqueName: \"kubernetes.io/projected/813b78c4-6644-444f-baa4-af92c9a1bfd0-kube-api-access-z2kls\") pod \"813b78c4-6644-444f-baa4-af92c9a1bfd0\" (UID: \"813b78c4-6644-444f-baa4-af92c9a1bfd0\") " Jan 22 12:52:04 crc kubenswrapper[5120]: I0122 12:52:04.075182 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/813b78c4-6644-444f-baa4-af92c9a1bfd0-kube-api-access-z2kls" (OuterVolumeSpecName: "kube-api-access-z2kls") pod "813b78c4-6644-444f-baa4-af92c9a1bfd0" (UID: "813b78c4-6644-444f-baa4-af92c9a1bfd0"). InnerVolumeSpecName "kube-api-access-z2kls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:52:04 crc kubenswrapper[5120]: I0122 12:52:04.151023 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z2kls\" (UniqueName: \"kubernetes.io/projected/813b78c4-6644-444f-baa4-af92c9a1bfd0-kube-api-access-z2kls\") on node \"crc\" DevicePath \"\"" Jan 22 12:52:04 crc kubenswrapper[5120]: I0122 12:52:04.709835 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484772-rwp4t" Jan 22 12:52:04 crc kubenswrapper[5120]: I0122 12:52:04.710066 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484772-rwp4t" event={"ID":"813b78c4-6644-444f-baa4-af92c9a1bfd0","Type":"ContainerDied","Data":"e77e3facd6342da7f82a78dd95e5ff9cfa5f434248b1f49ab7b339060ac887ef"} Jan 22 12:52:04 crc kubenswrapper[5120]: I0122 12:52:04.710136 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e77e3facd6342da7f82a78dd95e5ff9cfa5f434248b1f49ab7b339060ac887ef" Jan 22 12:52:05 crc kubenswrapper[5120]: I0122 12:52:05.046246 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484766-r7mx5"] Jan 22 12:52:05 crc kubenswrapper[5120]: I0122 12:52:05.053391 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484766-r7mx5"] Jan 22 12:52:05 crc kubenswrapper[5120]: I0122 12:52:05.587493 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4e688fc-6166-4472-9385-e06fa5bc818b" path="/var/lib/kubelet/pods/f4e688fc-6166-4472-9385-e06fa5bc818b/volumes" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.215353 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2mccj"] Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.218151 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="813b78c4-6644-444f-baa4-af92c9a1bfd0" containerName="oc" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.218216 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="813b78c4-6644-444f-baa4-af92c9a1bfd0" containerName="oc" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.218506 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="813b78c4-6644-444f-baa4-af92c9a1bfd0" containerName="oc" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.259248 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2mccj"] Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.259524 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.372761 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-catalog-content\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.372819 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c75c5\" (UniqueName: \"kubernetes.io/projected/2c0d5290-04f4-4490-ad2f-54d0bf67056d-kube-api-access-c75c5\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.373009 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-utilities\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.473967 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-catalog-content\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.474017 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c75c5\" (UniqueName: \"kubernetes.io/projected/2c0d5290-04f4-4490-ad2f-54d0bf67056d-kube-api-access-c75c5\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.474062 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-utilities\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.474670 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-utilities\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.476223 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-catalog-content\") pod \"redhat-operators-2mccj\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.493712 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c75c5\" (UniqueName: \"kubernetes.io/projected/2c0d5290-04f4-4490-ad2f-54d0bf67056d-kube-api-access-c75c5\") pod \"redhat-operators-2mccj\" (UID: 
\"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.584150 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.828822 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2mccj"] Jan 22 12:52:27 crc kubenswrapper[5120]: I0122 12:52:27.959314 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mccj" event={"ID":"2c0d5290-04f4-4490-ad2f-54d0bf67056d","Type":"ContainerStarted","Data":"4098ba92d3db347255dbcff80e5f1759f819e0281dbb289fcce2e22253a6b5a2"} Jan 22 12:52:28 crc kubenswrapper[5120]: I0122 12:52:28.968800 5120 generic.go:358] "Generic (PLEG): container finished" podID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerID="162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363" exitCode=0 Jan 22 12:52:28 crc kubenswrapper[5120]: I0122 12:52:28.968952 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mccj" event={"ID":"2c0d5290-04f4-4490-ad2f-54d0bf67056d","Type":"ContainerDied","Data":"162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363"} Jan 22 12:52:31 crc kubenswrapper[5120]: I0122 12:52:31.972326 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:52:31 crc kubenswrapper[5120]: I0122 12:52:31.972902 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:52:31 crc kubenswrapper[5120]: I0122 12:52:31.995852 5120 generic.go:358] "Generic (PLEG): container finished" podID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerID="7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7" exitCode=0 Jan 22 12:52:31 crc kubenswrapper[5120]: I0122 12:52:31.996023 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mccj" event={"ID":"2c0d5290-04f4-4490-ad2f-54d0bf67056d","Type":"ContainerDied","Data":"7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7"} Jan 22 12:52:33 crc kubenswrapper[5120]: I0122 12:52:33.007585 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mccj" event={"ID":"2c0d5290-04f4-4490-ad2f-54d0bf67056d","Type":"ContainerStarted","Data":"b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d"} Jan 22 12:52:33 crc kubenswrapper[5120]: I0122 12:52:33.035104 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2mccj" podStartSLOduration=4.00324177 podStartE2EDuration="6.035081911s" podCreationTimestamp="2026-01-22 12:52:27 +0000 UTC" firstStartedPulling="2026-01-22 12:52:28.970631167 +0000 UTC m=+3883.714579538" lastFinishedPulling="2026-01-22 12:52:31.002471338 +0000 UTC m=+3885.746419679" observedRunningTime="2026-01-22 12:52:33.030680059 +0000 UTC m=+3887.774628490" 
watchObservedRunningTime="2026-01-22 12:52:33.035081911 +0000 UTC m=+3887.779030262" Jan 22 12:52:37 crc kubenswrapper[5120]: I0122 12:52:37.585125 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:37 crc kubenswrapper[5120]: I0122 12:52:37.585733 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:38 crc kubenswrapper[5120]: I0122 12:52:38.634884 5120 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2mccj" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="registry-server" probeResult="failure" output=< Jan 22 12:52:38 crc kubenswrapper[5120]: timeout: failed to connect service ":50051" within 1s Jan 22 12:52:38 crc kubenswrapper[5120]: > Jan 22 12:52:47 crc kubenswrapper[5120]: I0122 12:52:47.655284 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:47 crc kubenswrapper[5120]: I0122 12:52:47.717637 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:47 crc kubenswrapper[5120]: I0122 12:52:47.904007 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2mccj"] Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.165546 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2mccj" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="registry-server" containerID="cri-o://b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d" gracePeriod=2 Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.547679 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.660316 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-catalog-content\") pod \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.660437 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c75c5\" (UniqueName: \"kubernetes.io/projected/2c0d5290-04f4-4490-ad2f-54d0bf67056d-kube-api-access-c75c5\") pod \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.660456 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-utilities\") pod \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\" (UID: \"2c0d5290-04f4-4490-ad2f-54d0bf67056d\") " Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.675241 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-utilities" (OuterVolumeSpecName: "utilities") pod "2c0d5290-04f4-4490-ad2f-54d0bf67056d" (UID: "2c0d5290-04f4-4490-ad2f-54d0bf67056d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.680555 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0d5290-04f4-4490-ad2f-54d0bf67056d-kube-api-access-c75c5" (OuterVolumeSpecName: "kube-api-access-c75c5") pod "2c0d5290-04f4-4490-ad2f-54d0bf67056d" (UID: "2c0d5290-04f4-4490-ad2f-54d0bf67056d"). InnerVolumeSpecName "kube-api-access-c75c5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.755673 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c0d5290-04f4-4490-ad2f-54d0bf67056d" (UID: "2c0d5290-04f4-4490-ad2f-54d0bf67056d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.762718 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c75c5\" (UniqueName: \"kubernetes.io/projected/2c0d5290-04f4-4490-ad2f-54d0bf67056d-kube-api-access-c75c5\") on node \"crc\" DevicePath \"\"" Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.762869 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:52:49 crc kubenswrapper[5120]: I0122 12:52:49.762953 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0d5290-04f4-4490-ad2f-54d0bf67056d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.200814 5120 generic.go:358] "Generic (PLEG): container finished" podID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerID="b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d" exitCode=0 Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.200999 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mccj" event={"ID":"2c0d5290-04f4-4490-ad2f-54d0bf67056d","Type":"ContainerDied","Data":"b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d"} Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.201478 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2mccj" event={"ID":"2c0d5290-04f4-4490-ad2f-54d0bf67056d","Type":"ContainerDied","Data":"4098ba92d3db347255dbcff80e5f1759f819e0281dbb289fcce2e22253a6b5a2"} Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.201121 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2mccj" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.201568 5120 scope.go:117] "RemoveContainer" containerID="b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.235691 5120 scope.go:117] "RemoveContainer" containerID="7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.252686 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2mccj"] Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.259314 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2mccj"] Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.274344 5120 scope.go:117] "RemoveContainer" containerID="162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.304232 5120 scope.go:117] "RemoveContainer" containerID="b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d" Jan 22 12:52:50 crc kubenswrapper[5120]: E0122 12:52:50.305049 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d\": container with ID starting with b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d not found: ID does not exist" containerID="b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.305103 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d"} err="failed to get container status \"b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d\": rpc error: code = NotFound desc = could not find container \"b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d\": container with ID starting with b5d07cd20e5a85c731aa5c021d474c6528646651fba3a07c09e5f46778b64c0d not found: ID does not exist" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.305138 5120 scope.go:117] "RemoveContainer" containerID="7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7" Jan 22 12:52:50 crc kubenswrapper[5120]: E0122 12:52:50.305622 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7\": container with ID starting with 7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7 not found: ID does not exist" containerID="7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.305674 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7"} err="failed to get container status \"7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7\": rpc error: code = NotFound desc = could not find container \"7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7\": container with ID starting with 7fad32a96b7e772564230b9fea865007345d5f66f7e10b57f4c5d9abf74358b7 not found: ID does not exist" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.305737 5120 scope.go:117] "RemoveContainer" 
containerID="162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363" Jan 22 12:52:50 crc kubenswrapper[5120]: E0122 12:52:50.306205 5120 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363\": container with ID starting with 162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363 not found: ID does not exist" containerID="162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363" Jan 22 12:52:50 crc kubenswrapper[5120]: I0122 12:52:50.306248 5120 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363"} err="failed to get container status \"162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363\": rpc error: code = NotFound desc = could not find container \"162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363\": container with ID starting with 162b16dff8c9528bc383815df37636ea7bbb576c5690a25382665ab3c39c3363 not found: ID does not exist" Jan 22 12:52:51 crc kubenswrapper[5120]: I0122 12:52:51.586575 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" path="/var/lib/kubelet/pods/2c0d5290-04f4-4490-ad2f-54d0bf67056d/volumes" Jan 22 12:52:52 crc kubenswrapper[5120]: I0122 12:52:52.349090 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:52:52 crc kubenswrapper[5120]: I0122 12:52:52.352297 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:52:52 crc kubenswrapper[5120]: I0122 12:52:52.359382 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:52:52 crc kubenswrapper[5120]: I0122 12:52:52.361327 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:53:01 crc kubenswrapper[5120]: I0122 12:53:01.972316 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 12:53:01 crc kubenswrapper[5120]: I0122 12:53:01.972907 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 12:53:01 crc kubenswrapper[5120]: I0122 12:53:01.972971 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 12:53:01 crc kubenswrapper[5120]: I0122 12:53:01.973650 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e"} 
pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 12:53:01 crc kubenswrapper[5120]: I0122 12:53:01.973708 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" gracePeriod=600 Jan 22 12:53:02 crc kubenswrapper[5120]: E0122 12:53:02.109278 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:53:02 crc kubenswrapper[5120]: I0122 12:53:02.310593 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" exitCode=0 Jan 22 12:53:02 crc kubenswrapper[5120]: I0122 12:53:02.310638 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e"} Jan 22 12:53:02 crc kubenswrapper[5120]: I0122 12:53:02.310708 5120 scope.go:117] "RemoveContainer" containerID="29f33e1dc1313dbff18da2384f7e62acd6b281793c17af877e5bfeb2aa570d05" Jan 22 12:53:02 crc kubenswrapper[5120]: I0122 12:53:02.311506 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:53:02 crc kubenswrapper[5120]: E0122 12:53:02.312149 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:53:05 crc kubenswrapper[5120]: I0122 12:53:05.138771 5120 scope.go:117] "RemoveContainer" containerID="93255bc069317c1b98c7e5d464d634946dfb59ed2823b2a9ae9c562272242064" Jan 22 12:53:16 crc kubenswrapper[5120]: I0122 12:53:16.572579 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:53:16 crc kubenswrapper[5120]: E0122 12:53:16.578080 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:53:29 crc kubenswrapper[5120]: I0122 12:53:29.572159 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:53:29 crc 
kubenswrapper[5120]: E0122 12:53:29.574505 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:53:42 crc kubenswrapper[5120]: I0122 12:53:42.571730 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:53:42 crc kubenswrapper[5120]: E0122 12:53:42.572853 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.215672 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ttj2q"] Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.216817 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="extract-content" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.216835 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="extract-content" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.216864 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="extract-utilities" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.216872 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="extract-utilities" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.216888 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="registry-server" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.216896 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="registry-server" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.217287 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="2c0d5290-04f4-4490-ad2f-54d0bf67056d" containerName="registry-server" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.241995 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ttj2q"] Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.242164 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.389737 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-547jd\" (UniqueName: \"kubernetes.io/projected/b22aea1c-2669-424a-8776-4b9474da6cc6-kube-api-access-547jd\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.389841 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-utilities\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.389987 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-catalog-content\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.491158 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-catalog-content\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.491238 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-547jd\" (UniqueName: \"kubernetes.io/projected/b22aea1c-2669-424a-8776-4b9474da6cc6-kube-api-access-547jd\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.491273 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-utilities\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.491783 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-utilities\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.492076 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-catalog-content\") pod \"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.513378 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-547jd\" (UniqueName: \"kubernetes.io/projected/b22aea1c-2669-424a-8776-4b9474da6cc6-kube-api-access-547jd\") pod 
\"certified-operators-ttj2q\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:46 crc kubenswrapper[5120]: I0122 12:53:46.568416 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:47 crc kubenswrapper[5120]: I0122 12:53:47.018098 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ttj2q"] Jan 22 12:53:47 crc kubenswrapper[5120]: I0122 12:53:47.740496 5120 generic.go:358] "Generic (PLEG): container finished" podID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerID="5833dc3ae185a65e097352702074f280ed152077bf58cb97b99c78c2346ec892" exitCode=0 Jan 22 12:53:47 crc kubenswrapper[5120]: I0122 12:53:47.741235 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttj2q" event={"ID":"b22aea1c-2669-424a-8776-4b9474da6cc6","Type":"ContainerDied","Data":"5833dc3ae185a65e097352702074f280ed152077bf58cb97b99c78c2346ec892"} Jan 22 12:53:47 crc kubenswrapper[5120]: I0122 12:53:47.741302 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttj2q" event={"ID":"b22aea1c-2669-424a-8776-4b9474da6cc6","Type":"ContainerStarted","Data":"bbe5e653076d109859ae55de05148dcacbdc0b4bbccf2a0b9c171c70f3e3127a"} Jan 22 12:53:49 crc kubenswrapper[5120]: I0122 12:53:49.765381 5120 generic.go:358] "Generic (PLEG): container finished" podID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerID="e6840dec376324ececd4e016ce8b48e6a676dc885832cc587472e901e7d2908f" exitCode=0 Jan 22 12:53:49 crc kubenswrapper[5120]: I0122 12:53:49.765517 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttj2q" event={"ID":"b22aea1c-2669-424a-8776-4b9474da6cc6","Type":"ContainerDied","Data":"e6840dec376324ececd4e016ce8b48e6a676dc885832cc587472e901e7d2908f"} Jan 22 12:53:50 crc kubenswrapper[5120]: I0122 12:53:50.776779 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttj2q" event={"ID":"b22aea1c-2669-424a-8776-4b9474da6cc6","Type":"ContainerStarted","Data":"2965a4added7097db620f58cb08c1a933a5bebaa440e5044ad9affa00812c8e0"} Jan 22 12:53:50 crc kubenswrapper[5120]: I0122 12:53:50.800803 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ttj2q" podStartSLOduration=3.972924517 podStartE2EDuration="4.800786861s" podCreationTimestamp="2026-01-22 12:53:46 +0000 UTC" firstStartedPulling="2026-01-22 12:53:47.743551792 +0000 UTC m=+3962.487500173" lastFinishedPulling="2026-01-22 12:53:48.571414146 +0000 UTC m=+3963.315362517" observedRunningTime="2026-01-22 12:53:50.797582427 +0000 UTC m=+3965.541530768" watchObservedRunningTime="2026-01-22 12:53:50.800786861 +0000 UTC m=+3965.544735192" Jan 22 12:53:53 crc kubenswrapper[5120]: I0122 12:53:53.572759 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:53:53 crc kubenswrapper[5120]: E0122 12:53:53.573634 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:53:56 crc kubenswrapper[5120]: I0122 12:53:56.569044 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:56 crc kubenswrapper[5120]: I0122 12:53:56.569491 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:56 crc kubenswrapper[5120]: I0122 12:53:56.630636 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:56 crc kubenswrapper[5120]: I0122 12:53:56.903674 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:53:56 crc kubenswrapper[5120]: I0122 12:53:56.974842 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ttj2q"] Jan 22 12:53:58 crc kubenswrapper[5120]: I0122 12:53:58.842452 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ttj2q" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="registry-server" containerID="cri-o://2965a4added7097db620f58cb08c1a933a5bebaa440e5044ad9affa00812c8e0" gracePeriod=2 Jan 22 12:53:59 crc kubenswrapper[5120]: I0122 12:53:59.873454 5120 generic.go:358] "Generic (PLEG): container finished" podID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerID="2965a4added7097db620f58cb08c1a933a5bebaa440e5044ad9affa00812c8e0" exitCode=0 Jan 22 12:53:59 crc kubenswrapper[5120]: I0122 12:53:59.873599 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttj2q" event={"ID":"b22aea1c-2669-424a-8776-4b9474da6cc6","Type":"ContainerDied","Data":"2965a4added7097db620f58cb08c1a933a5bebaa440e5044ad9affa00812c8e0"} Jan 22 12:53:59 crc kubenswrapper[5120]: I0122 12:53:59.952149 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.047754 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-547jd\" (UniqueName: \"kubernetes.io/projected/b22aea1c-2669-424a-8776-4b9474da6cc6-kube-api-access-547jd\") pod \"b22aea1c-2669-424a-8776-4b9474da6cc6\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.047875 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-catalog-content\") pod \"b22aea1c-2669-424a-8776-4b9474da6cc6\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.047908 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-utilities\") pod \"b22aea1c-2669-424a-8776-4b9474da6cc6\" (UID: \"b22aea1c-2669-424a-8776-4b9474da6cc6\") " Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.049637 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-utilities" (OuterVolumeSpecName: "utilities") pod "b22aea1c-2669-424a-8776-4b9474da6cc6" (UID: "b22aea1c-2669-424a-8776-4b9474da6cc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.064062 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b22aea1c-2669-424a-8776-4b9474da6cc6-kube-api-access-547jd" (OuterVolumeSpecName: "kube-api-access-547jd") pod "b22aea1c-2669-424a-8776-4b9474da6cc6" (UID: "b22aea1c-2669-424a-8776-4b9474da6cc6"). InnerVolumeSpecName "kube-api-access-547jd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.102834 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b22aea1c-2669-424a-8776-4b9474da6cc6" (UID: "b22aea1c-2669-424a-8776-4b9474da6cc6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.138778 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484774-q5l42"] Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.139769 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="registry-server" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.139797 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="registry-server" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.139810 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="extract-utilities" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.139817 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="extract-utilities" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.139839 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="extract-content" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.139846 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="extract-content" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.140019 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" containerName="registry-server" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.154235 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-547jd\" (UniqueName: \"kubernetes.io/projected/b22aea1c-2669-424a-8776-4b9474da6cc6-kube-api-access-547jd\") on node \"crc\" DevicePath \"\"" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.154294 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.154310 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b22aea1c-2669-424a-8776-4b9474da6cc6-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.192005 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484774-q5l42" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.192711 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484774-q5l42"] Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.196451 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.196967 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.200064 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.255873 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmqq9\" (UniqueName: \"kubernetes.io/projected/38e09f33-037b-4402-b891-c7d84dca4e0c-kube-api-access-zmqq9\") pod \"auto-csr-approver-29484774-q5l42\" (UID: \"38e09f33-037b-4402-b891-c7d84dca4e0c\") " pod="openshift-infra/auto-csr-approver-29484774-q5l42" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.358359 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zmqq9\" (UniqueName: \"kubernetes.io/projected/38e09f33-037b-4402-b891-c7d84dca4e0c-kube-api-access-zmqq9\") pod \"auto-csr-approver-29484774-q5l42\" (UID: \"38e09f33-037b-4402-b891-c7d84dca4e0c\") " pod="openshift-infra/auto-csr-approver-29484774-q5l42" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.383623 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmqq9\" (UniqueName: \"kubernetes.io/projected/38e09f33-037b-4402-b891-c7d84dca4e0c-kube-api-access-zmqq9\") pod \"auto-csr-approver-29484774-q5l42\" (UID: \"38e09f33-037b-4402-b891-c7d84dca4e0c\") " pod="openshift-infra/auto-csr-approver-29484774-q5l42" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.525321 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484774-q5l42" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.885971 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttj2q" event={"ID":"b22aea1c-2669-424a-8776-4b9474da6cc6","Type":"ContainerDied","Data":"bbe5e653076d109859ae55de05148dcacbdc0b4bbccf2a0b9c171c70f3e3127a"} Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.886760 5120 scope.go:117] "RemoveContainer" containerID="2965a4added7097db620f58cb08c1a933a5bebaa440e5044ad9affa00812c8e0" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.886372 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ttj2q" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.920533 5120 scope.go:117] "RemoveContainer" containerID="e6840dec376324ececd4e016ce8b48e6a676dc885832cc587472e901e7d2908f" Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.942838 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ttj2q"] Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.951023 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ttj2q"] Jan 22 12:54:00 crc kubenswrapper[5120]: I0122 12:54:00.953433 5120 scope.go:117] "RemoveContainer" containerID="5833dc3ae185a65e097352702074f280ed152077bf58cb97b99c78c2346ec892" Jan 22 12:54:01 crc kubenswrapper[5120]: I0122 12:54:01.006991 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484774-q5l42"] Jan 22 12:54:01 crc kubenswrapper[5120]: W0122 12:54:01.016678 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38e09f33_037b_4402_b891_c7d84dca4e0c.slice/crio-c62806bc2e287cc16cb3e3310fc4518bf6ef52d7986550d785abeb8abb82cf5d WatchSource:0}: Error finding container c62806bc2e287cc16cb3e3310fc4518bf6ef52d7986550d785abeb8abb82cf5d: Status 404 returned error can't find the container with id c62806bc2e287cc16cb3e3310fc4518bf6ef52d7986550d785abeb8abb82cf5d Jan 22 12:54:01 crc kubenswrapper[5120]: I0122 12:54:01.595561 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b22aea1c-2669-424a-8776-4b9474da6cc6" path="/var/lib/kubelet/pods/b22aea1c-2669-424a-8776-4b9474da6cc6/volumes" Jan 22 12:54:01 crc kubenswrapper[5120]: I0122 12:54:01.899905 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484774-q5l42" event={"ID":"38e09f33-037b-4402-b891-c7d84dca4e0c","Type":"ContainerStarted","Data":"c62806bc2e287cc16cb3e3310fc4518bf6ef52d7986550d785abeb8abb82cf5d"} Jan 22 12:54:02 crc kubenswrapper[5120]: I0122 12:54:02.920289 5120 generic.go:358] "Generic (PLEG): container finished" podID="38e09f33-037b-4402-b891-c7d84dca4e0c" containerID="6c22ec5cf52431656565b52791c399038ffbf4be2b60a8f90c1423eff5eb1f04" exitCode=0 Jan 22 12:54:02 crc kubenswrapper[5120]: I0122 12:54:02.920847 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484774-q5l42" event={"ID":"38e09f33-037b-4402-b891-c7d84dca4e0c","Type":"ContainerDied","Data":"6c22ec5cf52431656565b52791c399038ffbf4be2b60a8f90c1423eff5eb1f04"} Jan 22 12:54:04 crc kubenswrapper[5120]: I0122 12:54:04.183362 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484774-q5l42" Jan 22 12:54:04 crc kubenswrapper[5120]: I0122 12:54:04.326067 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmqq9\" (UniqueName: \"kubernetes.io/projected/38e09f33-037b-4402-b891-c7d84dca4e0c-kube-api-access-zmqq9\") pod \"38e09f33-037b-4402-b891-c7d84dca4e0c\" (UID: \"38e09f33-037b-4402-b891-c7d84dca4e0c\") " Jan 22 12:54:04 crc kubenswrapper[5120]: I0122 12:54:04.334904 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e09f33-037b-4402-b891-c7d84dca4e0c-kube-api-access-zmqq9" (OuterVolumeSpecName: "kube-api-access-zmqq9") pod "38e09f33-037b-4402-b891-c7d84dca4e0c" (UID: "38e09f33-037b-4402-b891-c7d84dca4e0c"). InnerVolumeSpecName "kube-api-access-zmqq9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:54:04 crc kubenswrapper[5120]: I0122 12:54:04.427623 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zmqq9\" (UniqueName: \"kubernetes.io/projected/38e09f33-037b-4402-b891-c7d84dca4e0c-kube-api-access-zmqq9\") on node \"crc\" DevicePath \"\"" Jan 22 12:54:04 crc kubenswrapper[5120]: I0122 12:54:04.942621 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484774-q5l42" event={"ID":"38e09f33-037b-4402-b891-c7d84dca4e0c","Type":"ContainerDied","Data":"c62806bc2e287cc16cb3e3310fc4518bf6ef52d7986550d785abeb8abb82cf5d"} Jan 22 12:54:04 crc kubenswrapper[5120]: I0122 12:54:04.942716 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c62806bc2e287cc16cb3e3310fc4518bf6ef52d7986550d785abeb8abb82cf5d" Jan 22 12:54:04 crc kubenswrapper[5120]: I0122 12:54:04.943162 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484774-q5l42" Jan 22 12:54:05 crc kubenswrapper[5120]: I0122 12:54:05.271569 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484768-cfmpc"] Jan 22 12:54:05 crc kubenswrapper[5120]: I0122 12:54:05.283788 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484768-cfmpc"] Jan 22 12:54:05 crc kubenswrapper[5120]: I0122 12:54:05.605413 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1196931-91a2-4869-bff6-80785ee0ed43" path="/var/lib/kubelet/pods/f1196931-91a2-4869-bff6-80785ee0ed43/volumes" Jan 22 12:54:06 crc kubenswrapper[5120]: I0122 12:54:06.572256 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:54:06 crc kubenswrapper[5120]: E0122 12:54:06.572422 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:54:17 crc kubenswrapper[5120]: I0122 12:54:17.572533 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:54:17 crc kubenswrapper[5120]: E0122 12:54:17.573536 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:54:31 crc kubenswrapper[5120]: I0122 12:54:31.572947 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:54:31 crc kubenswrapper[5120]: E0122 12:54:31.574306 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:54:45 crc kubenswrapper[5120]: I0122 12:54:45.598136 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:54:45 crc kubenswrapper[5120]: E0122 12:54:45.599189 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:54:56 crc kubenswrapper[5120]: I0122 12:54:56.571739 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 
12:54:56 crc kubenswrapper[5120]: E0122 12:54:56.572409 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:55:05 crc kubenswrapper[5120]: I0122 12:55:05.314023 5120 scope.go:117] "RemoveContainer" containerID="daf41329d180dcc37fb3f371cdaf516e4d7ff24c8288949d26f7303b4e826d13" Jan 22 12:55:10 crc kubenswrapper[5120]: I0122 12:55:10.573635 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:55:10 crc kubenswrapper[5120]: E0122 12:55:10.575200 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:55:22 crc kubenswrapper[5120]: I0122 12:55:22.571560 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:55:22 crc kubenswrapper[5120]: E0122 12:55:22.572720 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:55:34 crc kubenswrapper[5120]: I0122 12:55:34.571785 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:55:34 crc kubenswrapper[5120]: E0122 12:55:34.573491 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:55:49 crc kubenswrapper[5120]: I0122 12:55:49.572693 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:55:49 crc kubenswrapper[5120]: E0122 12:55:49.574089 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.154126 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484776-zrrxj"] Jan 22 12:56:00 crc 
kubenswrapper[5120]: I0122 12:56:00.155302 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="38e09f33-037b-4402-b891-c7d84dca4e0c" containerName="oc" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.155315 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e09f33-037b-4402-b891-c7d84dca4e0c" containerName="oc" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.155474 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="38e09f33-037b-4402-b891-c7d84dca4e0c" containerName="oc" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.164845 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484776-zrrxj"] Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.164942 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484776-zrrxj" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.167699 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.167916 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.170248 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.279548 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l22p6\" (UniqueName: \"kubernetes.io/projected/e601162d-810b-4cd9-a558-08f4b76f1234-kube-api-access-l22p6\") pod \"auto-csr-approver-29484776-zrrxj\" (UID: \"e601162d-810b-4cd9-a558-08f4b76f1234\") " pod="openshift-infra/auto-csr-approver-29484776-zrrxj" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.380928 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l22p6\" (UniqueName: \"kubernetes.io/projected/e601162d-810b-4cd9-a558-08f4b76f1234-kube-api-access-l22p6\") pod \"auto-csr-approver-29484776-zrrxj\" (UID: \"e601162d-810b-4cd9-a558-08f4b76f1234\") " pod="openshift-infra/auto-csr-approver-29484776-zrrxj" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.427064 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l22p6\" (UniqueName: \"kubernetes.io/projected/e601162d-810b-4cd9-a558-08f4b76f1234-kube-api-access-l22p6\") pod \"auto-csr-approver-29484776-zrrxj\" (UID: \"e601162d-810b-4cd9-a558-08f4b76f1234\") " pod="openshift-infra/auto-csr-approver-29484776-zrrxj" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.488447 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484776-zrrxj" Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.771807 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484776-zrrxj"] Jan 22 12:56:00 crc kubenswrapper[5120]: I0122 12:56:00.778074 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 12:56:01 crc kubenswrapper[5120]: I0122 12:56:01.776725 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484776-zrrxj" event={"ID":"e601162d-810b-4cd9-a558-08f4b76f1234","Type":"ContainerStarted","Data":"b3a0e2ea55a0a8efc20b51dc01b267f89c5baae8b0090a70cfd3f5b54cbdf783"} Jan 22 12:56:02 crc kubenswrapper[5120]: I0122 12:56:02.788894 5120 generic.go:358] "Generic (PLEG): container finished" podID="e601162d-810b-4cd9-a558-08f4b76f1234" containerID="7512fad5ec10f0c7660abd2dd1ea5030ac807aecd713cb9dae496f30a411cff4" exitCode=0 Jan 22 12:56:02 crc kubenswrapper[5120]: I0122 12:56:02.788993 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484776-zrrxj" event={"ID":"e601162d-810b-4cd9-a558-08f4b76f1234","Type":"ContainerDied","Data":"7512fad5ec10f0c7660abd2dd1ea5030ac807aecd713cb9dae496f30a411cff4"} Jan 22 12:56:03 crc kubenswrapper[5120]: I0122 12:56:03.572268 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:56:03 crc kubenswrapper[5120]: E0122 12:56:03.572952 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:56:04 crc kubenswrapper[5120]: I0122 12:56:04.194566 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484776-zrrxj" Jan 22 12:56:04 crc kubenswrapper[5120]: I0122 12:56:04.282522 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l22p6\" (UniqueName: \"kubernetes.io/projected/e601162d-810b-4cd9-a558-08f4b76f1234-kube-api-access-l22p6\") pod \"e601162d-810b-4cd9-a558-08f4b76f1234\" (UID: \"e601162d-810b-4cd9-a558-08f4b76f1234\") " Jan 22 12:56:04 crc kubenswrapper[5120]: I0122 12:56:04.300284 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e601162d-810b-4cd9-a558-08f4b76f1234-kube-api-access-l22p6" (OuterVolumeSpecName: "kube-api-access-l22p6") pod "e601162d-810b-4cd9-a558-08f4b76f1234" (UID: "e601162d-810b-4cd9-a558-08f4b76f1234"). InnerVolumeSpecName "kube-api-access-l22p6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:56:04 crc kubenswrapper[5120]: I0122 12:56:04.384290 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l22p6\" (UniqueName: \"kubernetes.io/projected/e601162d-810b-4cd9-a558-08f4b76f1234-kube-api-access-l22p6\") on node \"crc\" DevicePath \"\"" Jan 22 12:56:04 crc kubenswrapper[5120]: I0122 12:56:04.814327 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484776-zrrxj" event={"ID":"e601162d-810b-4cd9-a558-08f4b76f1234","Type":"ContainerDied","Data":"b3a0e2ea55a0a8efc20b51dc01b267f89c5baae8b0090a70cfd3f5b54cbdf783"} Jan 22 12:56:04 crc kubenswrapper[5120]: I0122 12:56:04.814379 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3a0e2ea55a0a8efc20b51dc01b267f89c5baae8b0090a70cfd3f5b54cbdf783" Jan 22 12:56:04 crc kubenswrapper[5120]: I0122 12:56:04.814463 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484776-zrrxj" Jan 22 12:56:05 crc kubenswrapper[5120]: I0122 12:56:05.277776 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484770-td669"] Jan 22 12:56:05 crc kubenswrapper[5120]: I0122 12:56:05.290507 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484770-td669"] Jan 22 12:56:05 crc kubenswrapper[5120]: I0122 12:56:05.588402 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="730d9559-f767-44f0-9346-cfba60c8f1b5" path="/var/lib/kubelet/pods/730d9559-f767-44f0-9346-cfba60c8f1b5/volumes" Jan 22 12:56:17 crc kubenswrapper[5120]: I0122 12:56:17.572481 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:56:17 crc kubenswrapper[5120]: E0122 12:56:17.574559 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:56:30 crc kubenswrapper[5120]: I0122 12:56:30.574791 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:56:30 crc kubenswrapper[5120]: E0122 12:56:30.575945 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:56:44 crc kubenswrapper[5120]: I0122 12:56:44.572437 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:56:44 crc kubenswrapper[5120]: E0122 12:56:44.573521 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:56:55 crc kubenswrapper[5120]: I0122 12:56:55.584068 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:56:55 crc kubenswrapper[5120]: E0122 12:56:55.585028 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:57:05 crc kubenswrapper[5120]: I0122 12:57:05.520681 5120 scope.go:117] "RemoveContainer" containerID="727bb28f7a024f28e2f883ea6ba608737fc5ddb620fdace8b333e8edb2713483" Jan 22 12:57:06 crc kubenswrapper[5120]: I0122 12:57:06.572678 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:57:06 crc kubenswrapper[5120]: E0122 12:57:06.573197 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:57:17 crc kubenswrapper[5120]: I0122 12:57:17.572387 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:57:17 crc kubenswrapper[5120]: E0122 12:57:17.573507 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:57:29 crc kubenswrapper[5120]: I0122 12:57:29.898328 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n6jng"] Jan 22 12:57:29 crc kubenswrapper[5120]: I0122 12:57:29.903381 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e601162d-810b-4cd9-a558-08f4b76f1234" containerName="oc" Jan 22 12:57:29 crc kubenswrapper[5120]: I0122 12:57:29.903449 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="e601162d-810b-4cd9-a558-08f4b76f1234" containerName="oc" Jan 22 12:57:29 crc kubenswrapper[5120]: I0122 12:57:29.903857 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="e601162d-810b-4cd9-a558-08f4b76f1234" containerName="oc" Jan 22 12:57:29 crc kubenswrapper[5120]: I0122 12:57:29.917330 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n6jng"] Jan 22 12:57:29 crc kubenswrapper[5120]: I0122 12:57:29.917502 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.003388 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-catalog-content\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.003709 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vkbh\" (UniqueName: \"kubernetes.io/projected/81789982-6ef2-4e7d-ab11-33380f68aad4-kube-api-access-8vkbh\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.003814 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-utilities\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.105551 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-utilities\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.105603 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-catalog-content\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.105637 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8vkbh\" (UniqueName: \"kubernetes.io/projected/81789982-6ef2-4e7d-ab11-33380f68aad4-kube-api-access-8vkbh\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.106549 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-utilities\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.106841 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-catalog-content\") pod \"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.135701 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vkbh\" (UniqueName: \"kubernetes.io/projected/81789982-6ef2-4e7d-ab11-33380f68aad4-kube-api-access-8vkbh\") pod 
\"community-operators-n6jng\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.249851 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.735702 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:57:30 crc kubenswrapper[5120]: E0122 12:57:30.736434 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:57:30 crc kubenswrapper[5120]: I0122 12:57:30.821480 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n6jng"] Jan 22 12:57:30 crc kubenswrapper[5120]: W0122 12:57:30.827091 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice/crio-5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73 WatchSource:0}: Error finding container 5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73: Status 404 returned error can't find the container with id 5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73 Jan 22 12:57:31 crc kubenswrapper[5120]: I0122 12:57:31.753401 5120 generic.go:358] "Generic (PLEG): container finished" podID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerID="d11b1c33635db296f3ff86c7092cf2072594be7992b8de2d62c687c16eab374e" exitCode=0 Jan 22 12:57:31 crc kubenswrapper[5120]: I0122 12:57:31.753476 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6jng" event={"ID":"81789982-6ef2-4e7d-ab11-33380f68aad4","Type":"ContainerDied","Data":"d11b1c33635db296f3ff86c7092cf2072594be7992b8de2d62c687c16eab374e"} Jan 22 12:57:31 crc kubenswrapper[5120]: I0122 12:57:31.753894 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6jng" event={"ID":"81789982-6ef2-4e7d-ab11-33380f68aad4","Type":"ContainerStarted","Data":"5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73"} Jan 22 12:57:33 crc kubenswrapper[5120]: I0122 12:57:33.771437 5120 generic.go:358] "Generic (PLEG): container finished" podID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerID="aa18c7b92186cce5d553621916e8d46cdbd7e8fab98ca62507a976ffa85e7597" exitCode=0 Jan 22 12:57:33 crc kubenswrapper[5120]: I0122 12:57:33.771540 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6jng" event={"ID":"81789982-6ef2-4e7d-ab11-33380f68aad4","Type":"ContainerDied","Data":"aa18c7b92186cce5d553621916e8d46cdbd7e8fab98ca62507a976ffa85e7597"} Jan 22 12:57:34 crc kubenswrapper[5120]: I0122 12:57:34.784161 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6jng" event={"ID":"81789982-6ef2-4e7d-ab11-33380f68aad4","Type":"ContainerStarted","Data":"174676accbb49bbfb77b4a5641602a02c2948a360f0936c6dfd07cff74411846"} Jan 22 12:57:34 crc kubenswrapper[5120]: 
I0122 12:57:34.813598 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n6jng" podStartSLOduration=4.76418549 podStartE2EDuration="5.813580869s" podCreationTimestamp="2026-01-22 12:57:29 +0000 UTC" firstStartedPulling="2026-01-22 12:57:31.754320142 +0000 UTC m=+4186.498268483" lastFinishedPulling="2026-01-22 12:57:32.803715521 +0000 UTC m=+4187.547663862" observedRunningTime="2026-01-22 12:57:34.810837135 +0000 UTC m=+4189.554785516" watchObservedRunningTime="2026-01-22 12:57:34.813580869 +0000 UTC m=+4189.557529210" Jan 22 12:57:40 crc kubenswrapper[5120]: I0122 12:57:40.251146 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:40 crc kubenswrapper[5120]: I0122 12:57:40.252041 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:40 crc kubenswrapper[5120]: I0122 12:57:40.320710 5120 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:40 crc kubenswrapper[5120]: I0122 12:57:40.894426 5120 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:41 crc kubenswrapper[5120]: I0122 12:57:41.948338 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n6jng"] Jan 22 12:57:43 crc kubenswrapper[5120]: I0122 12:57:43.876726 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n6jng" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="registry-server" containerID="cri-o://174676accbb49bbfb77b4a5641602a02c2948a360f0936c6dfd07cff74411846" gracePeriod=2 Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.573481 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:57:44 crc kubenswrapper[5120]: E0122 12:57:44.573864 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.885724 5120 generic.go:358] "Generic (PLEG): container finished" podID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerID="174676accbb49bbfb77b4a5641602a02c2948a360f0936c6dfd07cff74411846" exitCode=0 Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.885823 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6jng" event={"ID":"81789982-6ef2-4e7d-ab11-33380f68aad4","Type":"ContainerDied","Data":"174676accbb49bbfb77b4a5641602a02c2948a360f0936c6dfd07cff74411846"} Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.886272 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n6jng" event={"ID":"81789982-6ef2-4e7d-ab11-33380f68aad4","Type":"ContainerDied","Data":"5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73"} Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.886300 5120 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73" Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.906087 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.914173 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-catalog-content\") pod \"81789982-6ef2-4e7d-ab11-33380f68aad4\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.914336 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vkbh\" (UniqueName: \"kubernetes.io/projected/81789982-6ef2-4e7d-ab11-33380f68aad4-kube-api-access-8vkbh\") pod \"81789982-6ef2-4e7d-ab11-33380f68aad4\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.914370 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-utilities\") pod \"81789982-6ef2-4e7d-ab11-33380f68aad4\" (UID: \"81789982-6ef2-4e7d-ab11-33380f68aad4\") " Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.915405 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-utilities" (OuterVolumeSpecName: "utilities") pod "81789982-6ef2-4e7d-ab11-33380f68aad4" (UID: "81789982-6ef2-4e7d-ab11-33380f68aad4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.920662 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81789982-6ef2-4e7d-ab11-33380f68aad4-kube-api-access-8vkbh" (OuterVolumeSpecName: "kube-api-access-8vkbh") pod "81789982-6ef2-4e7d-ab11-33380f68aad4" (UID: "81789982-6ef2-4e7d-ab11-33380f68aad4"). InnerVolumeSpecName "kube-api-access-8vkbh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:57:44 crc kubenswrapper[5120]: I0122 12:57:44.963223 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81789982-6ef2-4e7d-ab11-33380f68aad4" (UID: "81789982-6ef2-4e7d-ab11-33380f68aad4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 22 12:57:45 crc kubenswrapper[5120]: I0122 12:57:45.015335 5120 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 12:57:45 crc kubenswrapper[5120]: I0122 12:57:45.015375 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8vkbh\" (UniqueName: \"kubernetes.io/projected/81789982-6ef2-4e7d-ab11-33380f68aad4-kube-api-access-8vkbh\") on node \"crc\" DevicePath \"\"" Jan 22 12:57:45 crc kubenswrapper[5120]: I0122 12:57:45.015387 5120 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81789982-6ef2-4e7d-ab11-33380f68aad4-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 12:57:45 crc kubenswrapper[5120]: I0122 12:57:45.895169 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n6jng" Jan 22 12:57:45 crc kubenswrapper[5120]: I0122 12:57:45.949137 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n6jng"] Jan 22 12:57:45 crc kubenswrapper[5120]: I0122 12:57:45.968809 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n6jng"] Jan 22 12:57:47 crc kubenswrapper[5120]: I0122 12:57:47.589736 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" path="/var/lib/kubelet/pods/81789982-6ef2-4e7d-ab11-33380f68aad4/volumes" Jan 22 12:57:52 crc kubenswrapper[5120]: I0122 12:57:52.501539 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:57:52 crc kubenswrapper[5120]: I0122 12:57:52.501587 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 12:57:52 crc kubenswrapper[5120]: I0122 12:57:52.512399 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:57:52 crc kubenswrapper[5120]: I0122 12:57:52.512665 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 12:57:55 crc kubenswrapper[5120]: E0122 12:57:55.006083 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice/crio-5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice\": RecentStats: unable to find data in memory cache]" Jan 22 12:57:58 crc kubenswrapper[5120]: I0122 12:57:58.572680 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:57:58 crc kubenswrapper[5120]: E0122 12:57:58.573693 5120 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dq269_openshift-machine-config-operator(90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9)\"" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.150316 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484778-mmgrb"] Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.151939 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="extract-content" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.152015 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="extract-content" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.152041 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="registry-server" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.152052 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="registry-server" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.152076 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="extract-utilities" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.152086 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="extract-utilities" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.152303 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="81789982-6ef2-4e7d-ab11-33380f68aad4" containerName="registry-server" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.161523 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484778-mmgrb"] Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.161714 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.176575 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.176879 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.176921 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.305407 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjjnx\" (UniqueName: \"kubernetes.io/projected/bf95f5c6-016d-4a27-b836-07355b8fe40c-kube-api-access-sjjnx\") pod \"auto-csr-approver-29484778-mmgrb\" (UID: \"bf95f5c6-016d-4a27-b836-07355b8fe40c\") " pod="openshift-infra/auto-csr-approver-29484778-mmgrb" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.407403 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sjjnx\" (UniqueName: \"kubernetes.io/projected/bf95f5c6-016d-4a27-b836-07355b8fe40c-kube-api-access-sjjnx\") pod \"auto-csr-approver-29484778-mmgrb\" (UID: \"bf95f5c6-016d-4a27-b836-07355b8fe40c\") " pod="openshift-infra/auto-csr-approver-29484778-mmgrb" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.438536 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjjnx\" (UniqueName: \"kubernetes.io/projected/bf95f5c6-016d-4a27-b836-07355b8fe40c-kube-api-access-sjjnx\") pod \"auto-csr-approver-29484778-mmgrb\" (UID: \"bf95f5c6-016d-4a27-b836-07355b8fe40c\") " pod="openshift-infra/auto-csr-approver-29484778-mmgrb" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.496314 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" Jan 22 12:58:00 crc kubenswrapper[5120]: I0122 12:58:00.772841 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484778-mmgrb"] Jan 22 12:58:01 crc kubenswrapper[5120]: I0122 12:58:01.044020 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" event={"ID":"bf95f5c6-016d-4a27-b836-07355b8fe40c","Type":"ContainerStarted","Data":"94a7aa94b071baad1c3ed51dc4ebffd6aca5380b8a5f5f92f4cafc553d9ddfcc"} Jan 22 12:58:02 crc kubenswrapper[5120]: I0122 12:58:02.053439 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" event={"ID":"bf95f5c6-016d-4a27-b836-07355b8fe40c","Type":"ContainerStarted","Data":"4af10899900eaff979fb0e1ea3a74a61d71ea4e0ba8e793e1134b58112a66e1e"} Jan 22 12:58:02 crc kubenswrapper[5120]: I0122 12:58:02.071907 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" podStartSLOduration=1.1845751 podStartE2EDuration="2.071863026s" podCreationTimestamp="2026-01-22 12:58:00 +0000 UTC" firstStartedPulling="2026-01-22 12:58:00.773807001 +0000 UTC m=+4215.517755362" lastFinishedPulling="2026-01-22 12:58:01.661094937 +0000 UTC m=+4216.405043288" observedRunningTime="2026-01-22 12:58:02.067614627 +0000 UTC m=+4216.811562998" watchObservedRunningTime="2026-01-22 12:58:02.071863026 +0000 UTC m=+4216.815811387" Jan 22 12:58:03 crc kubenswrapper[5120]: I0122 12:58:03.064351 5120 generic.go:358] "Generic (PLEG): container finished" podID="bf95f5c6-016d-4a27-b836-07355b8fe40c" containerID="4af10899900eaff979fb0e1ea3a74a61d71ea4e0ba8e793e1134b58112a66e1e" exitCode=0 Jan 22 12:58:03 crc kubenswrapper[5120]: I0122 12:58:03.064479 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" event={"ID":"bf95f5c6-016d-4a27-b836-07355b8fe40c","Type":"ContainerDied","Data":"4af10899900eaff979fb0e1ea3a74a61d71ea4e0ba8e793e1134b58112a66e1e"} Jan 22 12:58:04 crc kubenswrapper[5120]: I0122 12:58:04.437221 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" Jan 22 12:58:04 crc kubenswrapper[5120]: I0122 12:58:04.585455 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjjnx\" (UniqueName: \"kubernetes.io/projected/bf95f5c6-016d-4a27-b836-07355b8fe40c-kube-api-access-sjjnx\") pod \"bf95f5c6-016d-4a27-b836-07355b8fe40c\" (UID: \"bf95f5c6-016d-4a27-b836-07355b8fe40c\") " Jan 22 12:58:04 crc kubenswrapper[5120]: I0122 12:58:04.595217 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf95f5c6-016d-4a27-b836-07355b8fe40c-kube-api-access-sjjnx" (OuterVolumeSpecName: "kube-api-access-sjjnx") pod "bf95f5c6-016d-4a27-b836-07355b8fe40c" (UID: "bf95f5c6-016d-4a27-b836-07355b8fe40c"). InnerVolumeSpecName "kube-api-access-sjjnx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 12:58:04 crc kubenswrapper[5120]: I0122 12:58:04.688546 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sjjnx\" (UniqueName: \"kubernetes.io/projected/bf95f5c6-016d-4a27-b836-07355b8fe40c-kube-api-access-sjjnx\") on node \"crc\" DevicePath \"\"" Jan 22 12:58:05 crc kubenswrapper[5120]: I0122 12:58:05.084460 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" event={"ID":"bf95f5c6-016d-4a27-b836-07355b8fe40c","Type":"ContainerDied","Data":"94a7aa94b071baad1c3ed51dc4ebffd6aca5380b8a5f5f92f4cafc553d9ddfcc"} Jan 22 12:58:05 crc kubenswrapper[5120]: I0122 12:58:05.084523 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94a7aa94b071baad1c3ed51dc4ebffd6aca5380b8a5f5f92f4cafc553d9ddfcc" Jan 22 12:58:05 crc kubenswrapper[5120]: I0122 12:58:05.084612 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484778-mmgrb" Jan 22 12:58:05 crc kubenswrapper[5120]: I0122 12:58:05.132399 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484772-rwp4t"] Jan 22 12:58:05 crc kubenswrapper[5120]: I0122 12:58:05.141717 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484772-rwp4t"] Jan 22 12:58:05 crc kubenswrapper[5120]: E0122 12:58:05.218583 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice/crio-5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice\": RecentStats: unable to find data in memory cache]" Jan 22 12:58:05 crc kubenswrapper[5120]: I0122 12:58:05.596934 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="813b78c4-6644-444f-baa4-af92c9a1bfd0" path="/var/lib/kubelet/pods/813b78c4-6644-444f-baa4-af92c9a1bfd0/volumes" Jan 22 12:58:05 crc kubenswrapper[5120]: I0122 12:58:05.664916 5120 scope.go:117] "RemoveContainer" containerID="ff8af05f7b27c4b094ab8e8f34a856e723d09850f96dc8e0d652385ae56780a8" Jan 22 12:58:12 crc kubenswrapper[5120]: I0122 12:58:12.572065 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 12:58:13 crc kubenswrapper[5120]: I0122 12:58:13.168108 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"564566f88543c243d4bce411a2a81cdc20ab4dfa6edf69e38bfddd2aaa71b1ed"} Jan 22 12:58:15 crc kubenswrapper[5120]: E0122 12:58:15.407630 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice/crio-5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73\": RecentStats: unable to find data in memory cache]" Jan 22 12:58:25 crc kubenswrapper[5120]: 
E0122 12:58:25.627625 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice/crio-5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice\": RecentStats: unable to find data in memory cache]" Jan 22 12:58:35 crc kubenswrapper[5120]: E0122 12:58:35.816915 5120 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice/crio-5a27ea58014f7d2270e7562465b4b2a4c0c1318b0597d86739d5af21de484a73\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81789982_6ef2_4e7d_ab11_33380f68aad4.slice\": RecentStats: unable to find data in memory cache]" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.138679 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484780-r76g8"] Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.140793 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bf95f5c6-016d-4a27-b836-07355b8fe40c" containerName="oc" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.140822 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf95f5c6-016d-4a27-b836-07355b8fe40c" containerName="oc" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.141065 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="bf95f5c6-016d-4a27-b836-07355b8fe40c" containerName="oc" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.157211 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw"] Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.157402 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484780-r76g8" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.159375 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.160104 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.160682 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.165232 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484780-r76g8"] Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.165277 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw"] Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.165413 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.167446 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.167682 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.251722 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-config-volume\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.251787 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7258\" (UniqueName: \"kubernetes.io/projected/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-kube-api-access-h7258\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.251903 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgzlp\" (UniqueName: \"kubernetes.io/projected/7e0c82bd-3880-4a7b-98d0-751c23215e35-kube-api-access-qgzlp\") pod \"auto-csr-approver-29484780-r76g8\" (UID: \"7e0c82bd-3880-4a7b-98d0-751c23215e35\") " pod="openshift-infra/auto-csr-approver-29484780-r76g8" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.252049 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-secret-volume\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.353622 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h7258\" (UniqueName: \"kubernetes.io/projected/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-kube-api-access-h7258\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.353788 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qgzlp\" (UniqueName: \"kubernetes.io/projected/7e0c82bd-3880-4a7b-98d0-751c23215e35-kube-api-access-qgzlp\") pod \"auto-csr-approver-29484780-r76g8\" (UID: \"7e0c82bd-3880-4a7b-98d0-751c23215e35\") " pod="openshift-infra/auto-csr-approver-29484780-r76g8" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.353846 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-secret-volume\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.353888 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-config-volume\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.356088 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-config-volume\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.374226 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-secret-volume\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.376503 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7258\" (UniqueName: \"kubernetes.io/projected/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-kube-api-access-h7258\") pod \"collect-profiles-29484780-9vhhw\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.376623 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgzlp\" (UniqueName: \"kubernetes.io/projected/7e0c82bd-3880-4a7b-98d0-751c23215e35-kube-api-access-qgzlp\") pod \"auto-csr-approver-29484780-r76g8\" (UID: \"7e0c82bd-3880-4a7b-98d0-751c23215e35\") " pod="openshift-infra/auto-csr-approver-29484780-r76g8" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.492157 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484780-r76g8" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.501947 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:00 crc kubenswrapper[5120]: I0122 13:00:00.783342 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw"] Jan 22 13:00:01 crc kubenswrapper[5120]: W0122 13:00:01.055939 5120 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e0c82bd_3880_4a7b_98d0_751c23215e35.slice/crio-42a76bb118c34a83e7f77d4a0df553f87a5e5d04453bfd7d9e61b77a34444122 WatchSource:0}: Error finding container 42a76bb118c34a83e7f77d4a0df553f87a5e5d04453bfd7d9e61b77a34444122: Status 404 returned error can't find the container with id 42a76bb118c34a83e7f77d4a0df553f87a5e5d04453bfd7d9e61b77a34444122 Jan 22 13:00:01 crc kubenswrapper[5120]: I0122 13:00:01.057857 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484780-r76g8"] Jan 22 13:00:01 crc kubenswrapper[5120]: I0122 13:00:01.541683 5120 generic.go:358] "Generic (PLEG): container finished" podID="a0e2a0ec-867a-47b2-b6f5-7586c07979e8" containerID="95f8510cf745c21585132dbafc647672d844d28bb93b1ec91530a1a9f1b4139f" exitCode=0 Jan 22 13:00:01 crc kubenswrapper[5120]: I0122 13:00:01.541744 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" event={"ID":"a0e2a0ec-867a-47b2-b6f5-7586c07979e8","Type":"ContainerDied","Data":"95f8510cf745c21585132dbafc647672d844d28bb93b1ec91530a1a9f1b4139f"} Jan 22 13:00:01 crc kubenswrapper[5120]: I0122 13:00:01.541817 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" event={"ID":"a0e2a0ec-867a-47b2-b6f5-7586c07979e8","Type":"ContainerStarted","Data":"d3aa07de6324edacad36defa170f4a5fe9f6dddd7a9acb0e6d6dbea04e0b82e3"} Jan 22 13:00:01 crc kubenswrapper[5120]: I0122 13:00:01.543768 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484780-r76g8" event={"ID":"7e0c82bd-3880-4a7b-98d0-751c23215e35","Type":"ContainerStarted","Data":"42a76bb118c34a83e7f77d4a0df553f87a5e5d04453bfd7d9e61b77a34444122"} Jan 22 13:00:02 crc kubenswrapper[5120]: I0122 13:00:02.797970 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:02 crc kubenswrapper[5120]: I0122 13:00:02.904612 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7258\" (UniqueName: \"kubernetes.io/projected/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-kube-api-access-h7258\") pod \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " Jan 22 13:00:02 crc kubenswrapper[5120]: I0122 13:00:02.904746 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-secret-volume\") pod \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " Jan 22 13:00:02 crc kubenswrapper[5120]: I0122 13:00:02.905968 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-config-volume\") pod \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\" (UID: \"a0e2a0ec-867a-47b2-b6f5-7586c07979e8\") " Jan 22 13:00:02 crc kubenswrapper[5120]: I0122 13:00:02.906792 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-config-volume" (OuterVolumeSpecName: "config-volume") pod "a0e2a0ec-867a-47b2-b6f5-7586c07979e8" (UID: "a0e2a0ec-867a-47b2-b6f5-7586c07979e8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 13:00:02 crc kubenswrapper[5120]: I0122 13:00:02.910456 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a0e2a0ec-867a-47b2-b6f5-7586c07979e8" (UID: "a0e2a0ec-867a-47b2-b6f5-7586c07979e8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 13:00:02 crc kubenswrapper[5120]: I0122 13:00:02.910845 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-kube-api-access-h7258" (OuterVolumeSpecName: "kube-api-access-h7258") pod "a0e2a0ec-867a-47b2-b6f5-7586c07979e8" (UID: "a0e2a0ec-867a-47b2-b6f5-7586c07979e8"). InnerVolumeSpecName "kube-api-access-h7258". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.007908 5120 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.007970 5120 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.007988 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h7258\" (UniqueName: \"kubernetes.io/projected/a0e2a0ec-867a-47b2-b6f5-7586c07979e8-kube-api-access-h7258\") on node \"crc\" DevicePath \"\"" Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.566657 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" event={"ID":"a0e2a0ec-867a-47b2-b6f5-7586c07979e8","Type":"ContainerDied","Data":"d3aa07de6324edacad36defa170f4a5fe9f6dddd7a9acb0e6d6dbea04e0b82e3"} Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.566709 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3aa07de6324edacad36defa170f4a5fe9f6dddd7a9acb0e6d6dbea04e0b82e3" Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.566832 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484780-9vhhw" Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.875746 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk"] Jan 22 13:00:03 crc kubenswrapper[5120]: I0122 13:00:03.880713 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484735-6dctk"] Jan 22 13:00:05 crc kubenswrapper[5120]: I0122 13:00:05.590660 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5445dd15-192f-4528-92eb-f9507eb342c4" path="/var/lib/kubelet/pods/5445dd15-192f-4528-92eb-f9507eb342c4/volumes" Jan 22 13:00:05 crc kubenswrapper[5120]: I0122 13:00:05.861720 5120 scope.go:117] "RemoveContainer" containerID="21cb135b3d3bfb01aa6f0319bccbb82d56dd92e0a9f8f4fb24aad8d3347005ef" Jan 22 13:00:19 crc kubenswrapper[5120]: I0122 13:00:19.748031 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484780-r76g8" event={"ID":"7e0c82bd-3880-4a7b-98d0-751c23215e35","Type":"ContainerStarted","Data":"fb871e2771aac457fc81b9af984e1983ca2af0cd30d6e3db47d021e8e567453b"} Jan 22 13:00:19 crc kubenswrapper[5120]: I0122 13:00:19.768999 5120 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29484780-r76g8" podStartSLOduration=1.656337352 podStartE2EDuration="19.768983759s" podCreationTimestamp="2026-01-22 13:00:00 +0000 UTC" firstStartedPulling="2026-01-22 13:00:01.058180164 +0000 UTC m=+4335.802128555" lastFinishedPulling="2026-01-22 13:00:19.170826621 +0000 UTC m=+4353.914774962" observedRunningTime="2026-01-22 13:00:19.763627274 +0000 UTC m=+4354.507575625" watchObservedRunningTime="2026-01-22 13:00:19.768983759 +0000 UTC m=+4354.512932100" Jan 22 13:00:20 crc kubenswrapper[5120]: I0122 13:00:20.759469 5120 generic.go:358] "Generic (PLEG): container finished" 
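The startup-latency entry above is internally consistent: podStartE2EDuration is the time from podCreationTimestamp to watchObservedRunningTime, and podStartSLOduration is that figure minus the image-pull window (firstStartedPulling to lastFinishedPulling), which the SLO measurement excludes. The m=+N suffixes are seconds on the kubelet's monotonic clock, i.e. time since the kubelet process started, so they subtract cleanly:

    # Monotonic offsets (m=+N, seconds since kubelet start) from the entry above.
    pull_start = 4335.802128555   # firstStartedPulling
    pull_end   = 4353.914774962   # lastFinishedPulling
    e2e        = 19.768983759     # podStartE2EDuration

    slo = e2e - (pull_end - pull_start)
    print(f"{slo:.9f}")  # 1.656337352 -- matches podStartSLOduration exactly

The same offsets also date the kubelet's own start to roughly 13:00:19.77 minus 4354.51 s, i.e. about 11:47:45.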
podID="7e0c82bd-3880-4a7b-98d0-751c23215e35" containerID="fb871e2771aac457fc81b9af984e1983ca2af0cd30d6e3db47d021e8e567453b" exitCode=0 Jan 22 13:00:20 crc kubenswrapper[5120]: I0122 13:00:20.759567 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484780-r76g8" event={"ID":"7e0c82bd-3880-4a7b-98d0-751c23215e35","Type":"ContainerDied","Data":"fb871e2771aac457fc81b9af984e1983ca2af0cd30d6e3db47d021e8e567453b"} Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.144786 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484780-r76g8" Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.217488 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgzlp\" (UniqueName: \"kubernetes.io/projected/7e0c82bd-3880-4a7b-98d0-751c23215e35-kube-api-access-qgzlp\") pod \"7e0c82bd-3880-4a7b-98d0-751c23215e35\" (UID: \"7e0c82bd-3880-4a7b-98d0-751c23215e35\") " Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.222641 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e0c82bd-3880-4a7b-98d0-751c23215e35-kube-api-access-qgzlp" (OuterVolumeSpecName: "kube-api-access-qgzlp") pod "7e0c82bd-3880-4a7b-98d0-751c23215e35" (UID: "7e0c82bd-3880-4a7b-98d0-751c23215e35"). InnerVolumeSpecName "kube-api-access-qgzlp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.320084 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgzlp\" (UniqueName: \"kubernetes.io/projected/7e0c82bd-3880-4a7b-98d0-751c23215e35-kube-api-access-qgzlp\") on node \"crc\" DevicePath \"\"" Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.782526 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484780-r76g8" Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.782543 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484780-r76g8" event={"ID":"7e0c82bd-3880-4a7b-98d0-751c23215e35","Type":"ContainerDied","Data":"42a76bb118c34a83e7f77d4a0df553f87a5e5d04453bfd7d9e61b77a34444122"} Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.783454 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42a76bb118c34a83e7f77d4a0df553f87a5e5d04453bfd7d9e61b77a34444122" Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.840916 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484774-q5l42"] Jan 22 13:00:22 crc kubenswrapper[5120]: I0122 13:00:22.845711 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484774-q5l42"] Jan 22 13:00:23 crc kubenswrapper[5120]: I0122 13:00:23.591024 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e09f33-037b-4402-b891-c7d84dca4e0c" path="/var/lib/kubelet/pods/38e09f33-037b-4402-b891-c7d84dca4e0c/volumes" Jan 22 13:00:32 crc kubenswrapper[5120]: I0122 13:00:32.113639 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:00:32 crc kubenswrapper[5120]: I0122 13:00:32.114215 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:01:01 crc kubenswrapper[5120]: I0122 13:01:01.973152 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:01:01 crc kubenswrapper[5120]: I0122 13:01:01.973865 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:01:05 crc kubenswrapper[5120]: I0122 13:01:05.923529 5120 scope.go:117] "RemoveContainer" containerID="6c22ec5cf52431656565b52791c399038ffbf4be2b60a8f90c1423eff5eb1f04" Jan 22 13:01:31 crc kubenswrapper[5120]: I0122 13:01:31.973317 5120 patch_prober.go:28] interesting pod/machine-config-daemon-dq269 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:01:31 crc kubenswrapper[5120]: I0122 13:01:31.974042 5120 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:01:31 crc kubenswrapper[5120]: I0122 13:01:31.974135 5120 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dq269" Jan 22 13:01:31 crc kubenswrapper[5120]: I0122 13:01:31.975052 5120 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"564566f88543c243d4bce411a2a81cdc20ab4dfa6edf69e38bfddd2aaa71b1ed"} pod="openshift-machine-config-operator/machine-config-daemon-dq269" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 13:01:31 crc kubenswrapper[5120]: I0122 13:01:31.975140 5120 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dq269" podUID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerName="machine-config-daemon" containerID="cri-o://564566f88543c243d4bce411a2a81cdc20ab4dfa6edf69e38bfddd2aaa71b1ed" gracePeriod=600 Jan 22 13:01:32 crc kubenswrapper[5120]: I0122 13:01:32.133838 5120 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 13:01:32 crc kubenswrapper[5120]: I0122 13:01:32.680460 5120 generic.go:358] "Generic (PLEG): container finished" podID="90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9" containerID="564566f88543c243d4bce411a2a81cdc20ab4dfa6edf69e38bfddd2aaa71b1ed" exitCode=0 Jan 22 13:01:32 crc kubenswrapper[5120]: I0122 13:01:32.680558 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerDied","Data":"564566f88543c243d4bce411a2a81cdc20ab4dfa6edf69e38bfddd2aaa71b1ed"} Jan 22 13:01:32 crc kubenswrapper[5120]: I0122 13:01:32.680746 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dq269" event={"ID":"90c9e0b1-9c25-48fc-8aef-c587b5d6d8e9","Type":"ContainerStarted","Data":"66b3269c5b52320afd6538f2e9bbfc65ba479b93c17773ef46f5d4ccf54097d1"} Jan 22 13:01:32 crc kubenswrapper[5120]: I0122 13:01:32.680766 5120 scope.go:117] "RemoveContainer" containerID="dbf558918fffbef59164dd4f2880112da5ee7c772edfd9eec91c378b2021782e" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.163358 5120 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29484782-gb27c"] Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.164903 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7e0c82bd-3880-4a7b-98d0-751c23215e35" containerName="oc" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.164921 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0c82bd-3880-4a7b-98d0-751c23215e35" containerName="oc" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.164952 5120 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0e2a0ec-867a-47b2-b6f5-7586c07979e8" containerName="collect-profiles" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.164981 5120 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e2a0ec-867a-47b2-b6f5-7586c07979e8" containerName="collect-profiles" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.165154 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="a0e2a0ec-867a-47b2-b6f5-7586c07979e8" 
containerName="collect-profiles" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.165177 5120 memory_manager.go:356] "RemoveStaleState removing state" podUID="7e0c82bd-3880-4a7b-98d0-751c23215e35" containerName="oc" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.177999 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484782-gb27c"] Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.178269 5120 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484782-gb27c" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.181590 5120 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnj7t\" (UniqueName: \"kubernetes.io/projected/44bfe647-f6af-4128-a6d5-c44e07a88656-kube-api-access-tnj7t\") pod \"auto-csr-approver-29484782-gb27c\" (UID: \"44bfe647-f6af-4128-a6d5-c44e07a88656\") " pod="openshift-infra/auto-csr-approver-29484782-gb27c" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.182522 5120 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-g2chw\"" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.182753 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.183147 5120 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.282460 5120 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tnj7t\" (UniqueName: \"kubernetes.io/projected/44bfe647-f6af-4128-a6d5-c44e07a88656-kube-api-access-tnj7t\") pod \"auto-csr-approver-29484782-gb27c\" (UID: \"44bfe647-f6af-4128-a6d5-c44e07a88656\") " pod="openshift-infra/auto-csr-approver-29484782-gb27c" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.307716 5120 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnj7t\" (UniqueName: \"kubernetes.io/projected/44bfe647-f6af-4128-a6d5-c44e07a88656-kube-api-access-tnj7t\") pod \"auto-csr-approver-29484782-gb27c\" (UID: \"44bfe647-f6af-4128-a6d5-c44e07a88656\") " pod="openshift-infra/auto-csr-approver-29484782-gb27c" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.509375 5120 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484782-gb27c" Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.733228 5120 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29484782-gb27c"] Jan 22 13:02:00 crc kubenswrapper[5120]: I0122 13:02:00.962515 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484782-gb27c" event={"ID":"44bfe647-f6af-4128-a6d5-c44e07a88656","Type":"ContainerStarted","Data":"bd14c05b526ae6592318df0d9e6eb248265b02b7dfa6684d29464f62a25e86c1"} Jan 22 13:02:02 crc kubenswrapper[5120]: I0122 13:02:02.980276 5120 generic.go:358] "Generic (PLEG): container finished" podID="44bfe647-f6af-4128-a6d5-c44e07a88656" containerID="8c525e7f1b5c178020eb94d211f73512158ca45430ba0508756922f0a66a75f4" exitCode=0 Jan 22 13:02:02 crc kubenswrapper[5120]: I0122 13:02:02.980475 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484782-gb27c" event={"ID":"44bfe647-f6af-4128-a6d5-c44e07a88656","Type":"ContainerDied","Data":"8c525e7f1b5c178020eb94d211f73512158ca45430ba0508756922f0a66a75f4"} Jan 22 13:02:04 crc kubenswrapper[5120]: I0122 13:02:04.273610 5120 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29484782-gb27c" Jan 22 13:02:04 crc kubenswrapper[5120]: I0122 13:02:04.345595 5120 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnj7t\" (UniqueName: \"kubernetes.io/projected/44bfe647-f6af-4128-a6d5-c44e07a88656-kube-api-access-tnj7t\") pod \"44bfe647-f6af-4128-a6d5-c44e07a88656\" (UID: \"44bfe647-f6af-4128-a6d5-c44e07a88656\") " Jan 22 13:02:04 crc kubenswrapper[5120]: I0122 13:02:04.361053 5120 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44bfe647-f6af-4128-a6d5-c44e07a88656-kube-api-access-tnj7t" (OuterVolumeSpecName: "kube-api-access-tnj7t") pod "44bfe647-f6af-4128-a6d5-c44e07a88656" (UID: "44bfe647-f6af-4128-a6d5-c44e07a88656"). InnerVolumeSpecName "kube-api-access-tnj7t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 13:02:04 crc kubenswrapper[5120]: I0122 13:02:04.447428 5120 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tnj7t\" (UniqueName: \"kubernetes.io/projected/44bfe647-f6af-4128-a6d5-c44e07a88656-kube-api-access-tnj7t\") on node \"crc\" DevicePath \"\"" Jan 22 13:02:05 crc kubenswrapper[5120]: I0122 13:02:05.002250 5120 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29484782-gb27c" event={"ID":"44bfe647-f6af-4128-a6d5-c44e07a88656","Type":"ContainerDied","Data":"bd14c05b526ae6592318df0d9e6eb248265b02b7dfa6684d29464f62a25e86c1"} Jan 22 13:02:05 crc kubenswrapper[5120]: I0122 13:02:05.002948 5120 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd14c05b526ae6592318df0d9e6eb248265b02b7dfa6684d29464f62a25e86c1" Jan 22 13:02:05 crc kubenswrapper[5120]: I0122 13:02:05.002471 5120 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29484782-gb27c" Jan 22 13:02:05 crc kubenswrapper[5120]: I0122 13:02:05.337852 5120 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29484776-zrrxj"] Jan 22 13:02:05 crc kubenswrapper[5120]: I0122 13:02:05.342341 5120 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29484776-zrrxj"] Jan 22 13:02:05 crc kubenswrapper[5120]: I0122 13:02:05.587825 5120 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e601162d-810b-4cd9-a558-08f4b76f1234" path="/var/lib/kubelet/pods/e601162d-810b-4cd9-a558-08f4b76f1234/volumes" Jan 22 13:02:06 crc kubenswrapper[5120]: I0122 13:02:06.081789 5120 scope.go:117] "RemoveContainer" containerID="7512fad5ec10f0c7660abd2dd1ea5030ac807aecd713cb9dae496f30a411cff4" Jan 22 13:02:52 crc kubenswrapper[5120]: I0122 13:02:52.642555 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 13:02:52 crc kubenswrapper[5120]: I0122 13:02:52.647515 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4lzht_67eb0b85-4fb2-4c18-a78b-e2eeaa4d2087/kube-multus/0.log" Jan 22 13:02:52 crc kubenswrapper[5120]: I0122 13:02:52.658885 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 22 13:02:52 crc kubenswrapper[5120]: I0122 13:02:52.662598 5120 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"